Tutorials
Hands on DiSL: The Java Bytecode Instrumentation Framework

Time: April 21, 10:30 - 12:00

Short description:

DiSL is a Java bytecode instrumentation framework developed in cooperation between the University of Lugano, Charles University in Prague, and Shanghai Jiao Tong University. The DiSL language is inspired by AOP principles but tailored for the kind of instrumentation used in dynamic program analysis. Features such as the absence of additional allocations and transparent handling of the instrumented code make DiSL well suited for performance evaluation. The tutorial explains the basic features of DiSL using a simple instrumentation task. The audience is encouraged to bring their own laptops, as the tutorial includes simple practical examples that can be tried during the session.
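DiSL itself is Java-based, and its snippets are written as annotated Java methods woven into bytecode. As a language-neutral analogy only (not DiSL code; every name below is invented for the example), the following Python sketch shows the kind of entry-event analysis such instrumentation computes, using `sys.settrace` to count function invocations:

```python
import sys
from collections import Counter

calls = Counter()

def tracer(frame, event, arg):
    # Count every function entry, similar in spirit to advice attached
    # to method bodies (analogy only; DiSL instruments JVM bytecode
    # directly rather than hooking a runtime trace callback).
    if event == "call":
        calls[frame.f_code.co_name] += 1
    return None  # no per-line tracing needed

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

sys.settrace(tracer)
fib(5)
sys.settrace(None)
```

Unlike this naive tracer, which pays a callback on every call in the process, DiSL inserts the analysis code into the bytecode of only the selected methods, which is what makes it usable for performance evaluation.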

Bio:

Lukáš Marek is a PhD student under the supervision of Prof. Petr Tůma at Charles University, Czech Republic. His research topics include dynamic program analysis and performance evaluation.

Holistic Optimization of Distribution Automation Smart-Grid Designs using Survivability Modeling

Time: April 21, 13:30 - 15:00

Short description:

Smart grids are fostering a paradigm shift in the realm of power distribution systems. Whereas traditionally different components of the power distribution system have been provided and analyzed by different teams, smart grids require a unified and holistic approach taking into consideration the interplay of distributed generation, distribution automation topology, intelligent features, and others. In this tutorial, we describe the use of transient survivability metrics to create better distribution automation network designs. Our approach combines survivability analysis and power flow analysis to assess the survivability of the distribution power grid network. We conclude the tutorial by presenting practical optimization results based on real distribution grids.
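The idea of a transient survivability metric can be sketched with a deliberately tiny recovery model. All states, rates, and load fractions below are invented for illustration; they are not values or models from the tutorial, which combines full survivability and power flow analyses:

```python
# Hypothetical 3-state recovery model after a distribution-grid failure:
# 0 = full outage, 1 = fault isolated / partial supply, 2 = fully repaired.
# Rates and load fractions are illustrative assumptions only.

ISOLATION_RATE = 2.0           # transitions/hour: outage -> isolated
REPAIR_RATE = 0.5              # transitions/hour: isolated -> repaired
LOAD_SERVED = [0.0, 0.6, 1.0]  # fraction of load served in each state

def transient(t, dt=1e-3):
    """Euler integration of the CTMC forward equations from state 0."""
    p = [1.0, 0.0, 0.0]
    for _ in range(int(t / dt)):
        flow01 = ISOLATION_RATE * p[0] * dt
        flow12 = REPAIR_RATE * p[1] * dt
        p = [p[0] - flow01, p[1] + flow01 - flow12, p[2] + flow12]
    return p

def expected_load_served(t):
    """Transient survivability metric: expected fraction of load served."""
    return sum(pi * f for pi, f in zip(transient(t), LOAD_SERVED))
```

A quantity like `expected_load_served(t)` (the expected fraction of demand served t hours after a failure) is the kind of transient metric that can be used to compare alternative distribution automation designs.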

Bio:

Alberto Avritzer received a Ph.D. in Computer Science from the University of California, Los Angeles, an M.Sc. in Computer Science from the Federal University of Minas Gerais, Brazil, and a B.Sc. in Computer Engineering from the Technion, Israel Institute of Technology. He is currently a Senior Member of the Technical Staff in the Software Engineering Department at Siemens Corporate Research, Princeton, New Jersey. Before moving to Siemens Corporate Research, he spent 13 years at AT&T Bell Laboratories, where he developed tools and techniques for performance testing and analysis. He spent the summer of 1987 at IBM Research, Yorktown Heights. His research interests are in software engineering, particularly software testing, monitoring and rejuvenation of smoothly degrading systems, and metrics to assess software architecture; he has published over 50 papers in journals and refereed conference proceedings in those areas. Alberto is co-editor of the Springer book Resilience Assessment and Evaluation of Computing Systems and is a Senior Member of the ACM. Dr. Avritzer can be reached at alberto.avritzer@siemens.com.

Anne Koziolek (Martens) is an assistant professor at the Karlsruhe Institute of Technology (KIT), Germany. Before, she was a postdoc researcher at the University of Zurich, Switzerland. She received her PhD degree from KIT in 2011 and her Diploma degree from University of Oldenburg, Germany, in 2007. Her research interests include the iterative handling of software architecture and quality requirements as well as software architecture evaluation and assessment. In particular, she is interested in automated design space exploration and optimization of software architectures with respect to quantifiable quality attributes such as performance, reliability, survivability, and costs.

Daniel Sadoc Menasché received his M.Sc. and Ph.D. degrees in Computer Science from the University of Massachusetts, Amherst, in 2011. He received a B.Sc. degree cum laude in Computer Science and an M.Sc. degree in Computer and Systems Engineering from the Federal University of Rio de Janeiro (UFRJ), Brazil, in 2002 and 2005, respectively. He did internships at INRIA Sophia Antipolis, the University of Avignon, and Technicolor. In 2011, he joined the Department of Computer Science at the Federal University of Rio de Janeiro (UFRJ), Brazil. His main interests are in the modeling and analysis of computer systems, including computer networks and smart grids.

Analysis of concurrent models with non-Markovian temporal parameters

Times:

Part 1: April 21, 13:30 - 15:00

Part 2: April 21, 15:30 - 17:00

Short description:

The tutorial addresses the analysis of stochastic models that include non-exponentially distributed temporal parameters.

It will first characterize different classes of stochastic processes (CTMC, SMP, MRP, GSMP) that may underlie a high-level model depending on the distribution of timed events and their concurrency, surveying consolidated solution techniques and pointing out their respective limitations.

It will then describe the method of Stochastic State Classes, which integrates symbolic analysis of feasible timings based on DBM zones with symbolic derivation of multivariate probability distributions and with the analytic technique of Markov renewal theory.

In so doing, the tutorial will show how the method of stochastic state classes provides both a general approach for the stochastic characterization of the process that underlies a model and a concrete means for the analytic evaluation of steady state and transient behavior of models with complex timings and concurrency.

The new release of the Oris tool will be demonstrated, addressing functional capabilities for modeling and evaluation as well as structural characteristics that make it open to reuse for a variety of modeling formalisms and solution techniques.
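For the simplest of the process classes above, the CTMC, transient probabilities can be computed analytically by uniformization. The sketch below is an illustrative baseline only, not part of Oris or of the stochastic state class method, which precisely targets the non-Markovian models a CTMC cannot capture:

```python
import math

def uniformization(Q, p0, t, eps=1e-12):
    """Transient distribution p(t) = p0 · exp(Qt) for a small CTMC,
    computed by uniformization (Jensen's method).

    Q: generator matrix (list of rows), p0: initial distribution.
    """
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) * 1.05  # uniformization rate
    # Embedded DTMC kernel P = I + Q / lam
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    v = list(p0)                  # p0 · P^k, updated iteratively
    result = [0.0] * n
    poisson = math.exp(-lam * t)  # Poisson(lam*t) pmf at k = 0
    k, acc = 0, 0.0
    while acc < 1.0 - eps:
        for i in range(n):
            result[i] += poisson * v[i]
        acc += poisson
        k += 1
        poisson *= lam * t / k
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
    return result
```

For example, a two-state failure/repair chain with failure rate 1 and repair rate 2 has the closed-form up-state probability 2/3 + e^(-3t)/3, which the routine reproduces; models with deterministic or generally distributed timers fall outside this technique and motivate stochastic state classes.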

Bio:

Enrico Vicario is a full professor of Computer Science at the Department of Information Engineering of the University of Florence. He was born in 1965 and received his master's degree in Electronics Engineering (cum laude) and his Ph.D. in Informatics and Telecommunications Engineering in 1990 and 1994, respectively.

His research focuses mainly on the analysis of timed models for quantitative evaluation, correctness verification, and testing of concurrent real-time systems. He also carries out substantial experimental work in the area of software architectures and development methods.

Use Case-Driven Performance Engineering without “Concurrent Users”

Time: April 21, 15:30 - 17:00

Short description:

The tutorial is about how to write precise performance requirements and use them when building an automated performance test. More precisely, the tutorial makes the following five contributions.

1) The concept of concurrent users often causes confusion when used to define performance requirements in industrial software projects. The term is frequently used to state a performance requirement without clarifying what the users will be doing, or how often. The tutorial offers a thorough analysis of the concept and related notions.

2) Despite the confusion surrounding it, the concept of concurrent users – in a precise form – is advocated in the community for stating performance requirements. However, we argue that, even when stated in precise terms, this approach has drawbacks. Indeed, a system may perform better than expected even if the number of concurrent users it can handle is worse than expected. A better-suited notion is that of throughput.

3) But even when basing performance requirements on clear, well-suited concepts, there appears to be no uniform format in the literature for such requirements. In particular, the requirements are sometimes stated in general terms rather than for the specific areas of functionality of the system. As a consequence, the point may be missed that the throughput may be unevenly distributed over the functionality of the system. We therefore advocate the format of performance-annotated use cases, adding requirements on throughput and response times to the traditional use case.

4) It is well known how functional test cases are developed from use cases. In contrast, less has been said about the generation of performance test cases. Therefore, we show how the enriched use cases not only provide precise and meaningful requirements, but also yield a detailed specification of the performance test set-up that can be used directly to configure load test clients. As a bonus, an initial configuration of the system's capacity for handling concurrent users and requests is also provided.

5) Finally, we outline an overall approach to performance testing based on the above ideas. The approach has been followed in several industrial projects.
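The link between concurrent users, throughput, and response time that underlies contributions 2–4 is Little's law; for a closed system with think time it reads N = X · (R + Z). A minimal sketch with hypothetical figures (the function name and all numbers are invented for illustration):

```python
def implied_concurrency(throughput, response_time, think_time=0.0):
    """Little's law for a closed system: N = X * (R + Z).

    throughput:    X, completed requests per second
    response_time: R, seconds per request
    think_time:    Z, seconds a user pauses between requests
    """
    return throughput * (response_time + think_time)

# Hypothetical per-use-case requirement: 50 requests/s at a 0.2 s
# response time, with users pausing 5 s between requests. The
# "concurrent users" figure is then derived, not a primary requirement.
n = implied_concurrency(50.0, 0.2, 5.0)
```

This is why a throughput requirement per use case, plus a response-time bound, pins down user concurrency, while a bare "concurrent users" number pins down neither the load nor the work being done.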

Bio:

Morten Heine Sørensen is a former assistant professor at the Department of Computer Science, University of Copenhagen, where he specialized in various topics, including program optimization. He has published some 50 papers and a book on types in programming languages from a mathematical perspective. He has worked as a consultant in the software industry for more than ten years, specializing in performance testing and participating in a number of mission-critical projects. He also teaches what appears to be the only industrial course on performance testing in Scandinavia (not counting training in specific tools).