A Symbolic Approach to the Analysis of Multi-Formalism Markov Reward Models
2013 (English). In: Theory and Application of Multi-Formalism Modeling. Pennsylvania: IGI Global, 2013, 1st ed. Chapter in book (Refereed).
Complex systems and complex requirements are challenges that designers must face to reach quality results, and multi-formalism modeling offers tools and methods that allow modelers to exploit the benefits of different techniques within a general framework intended to address these challenges.
Theory and Application of Multi-Formalism Modeling boldly explores the importance of this topic by gathering experiences, theories, applications, and solutions from diverse perspectives of those involved with multi-formalism modeling. Professionals, researchers, academics, and students in this field will be able to critically evaluate the latest developments and future directions of multi-formalism research.
Abstract of book chapter:
When modelling large systems, modularity is an important concept, as it aids modellers in mastering the complexity of their model. Moreover, employing different modelling formalisms within the same modelling project has the potential to ease the description of various parts or aspects of the overall system. In the area of performability modelling, formalisms such as stochastic reward nets, stochastic process algebras, stochastic automata or stochastic UML state charts are often used, and several of these may be employed within one modelling project. This paper presents an approach for efficiently constructing a symbolic representation in the form of a Zero-suppressed Binary Decision Diagram (ZDD) which represents the Markov Reward Model underlying a multi-formalism high-level model. In this approach, the interaction between the submodels may be established either by the sharing of state variables or by the synchronisation of common activities. It is shown that the decision diagram data structure and the associated algorithms enable highly efficient state space generation and different forms of analysis of the underlying Markov Reward Model, e.g. calculation of reward measures or asserting non-functional system properties by means of model checking techniques.
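The abstract's notion of submodel interaction via synchronised common activities can be illustrated with a minimal sketch. The following is not the chapter's symbolic (ZDD-based) algorithm but a plain explicit-state version of the same idea, using hypothetical toy submodels (`producer`, `consumer`) whose shared activity `"sync"` must fire jointly while all other activities interleave:

```python
from collections import deque

# Two toy submodels given as {state: [(activity, next_state), ...]}.
# Activities appearing in both alphabets (here "sync") synchronise;
# all other activities fire independently (interleaving semantics).
producer = {0: [("produce", 1)], 1: [("sync", 0)]}
consumer = {0: [("sync", 1)], 1: [("consume", 0)]}
shared = {"sync"}

def successors(state):
    p, c = state
    # local activities of each submodel move that submodel alone
    for act, p2 in producer[p]:
        if act not in shared:
            yield act, (p2, c)
    for act, c2 in consumer[c]:
        if act not in shared:
            yield act, (p, c2)
    # shared activities move both submodels in one joint step
    for act, p2 in producer[p]:
        if act in shared:
            for act2, c2 in consumer[c]:
                if act2 == act:
                    yield act, (p2, c2)

def reachable(initial):
    """Breadth-first exploration of the composed state space."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        for _, t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

print(sorted(reachable((0, 0))))
```

In the symbolic approach the abstract describes, the set `seen` and the transition relation would instead be encoded as decision diagrams, so the frontier expansion operates on whole sets of states at once rather than one state per iteration.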
Place, publisher, year, edition, pages
Pennsylvania: IGI Global, 2013, 1st ed.
Keywords: Large Scale Population Models, Modeling, Composability, Modeling Techniques, Multi-formalism, Theoretical Foundations, Petri-Net, Models and Tools, Workflow Languages
Research subject: Computer Systems Sciences
Identifiers
URN: urn:nbn:se:uu:diva-205537
ISBN: 1466646594
OAI: oai:DiVA.org:uu-205537
DiVA: diva2:641844