Uppsala University Publications
1–50 of 126
  • 1.
    Amnell, Tobias
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Code synthesis for timed automata (2003). Licentiate thesis, monograph (Other scientific)
    Abstract [en]

    In this thesis, we study executable behaviours of timed models. The focus is on synthesis of executable code with predictable behaviours from high level abstract models. We assume that a timed system consists of two parts: the control software and the plant (i.e. the environment to be controlled). Both are modelled as timed automata extended with real time tasks. We consider the extended timed automata as design models.

    We present a compilation procedure to transform design models into executable code, including a run-time scheduler (run-time system), that preserves the correctness and schedulability of the models. The compilation procedure has been implemented in a prototype C-code generator for the brickOS operating system, included in the Times tool. We also present an animator, based on hybrid automata, for describing a simulated environment (i.e. the plant) of timed systems. The tasks of the hybrid automata define differential equations, and the animator uses a differential-equation solver to calculate step-wise approximations of real-valued variables. The animated objects, described as hybrid automata, are compiled by the Times tool into executable code using a procedure similar to the one for the controller software.

    To demonstrate the applicability of timed automata with tasks as a design language we have developed the control software for a production cell. The production cell is built in LEGO and is controlled by a Hitachi H8 based LEGO-Mindstorms control brick. The control software has been analysed (using the Times tool) for schedulability and other safety properties. Using results from the analysis we were able to avoid generating code for parts of the design that could never be reached, and could also limit the amount of memory allocated for the task queue.
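
    The schedulability analysis mentioned above can be illustrated with classic fixed-priority response-time analysis. The sketch below is a generic textbook version of that iteration, not the actual algorithm implemented in the Times tool, and the task set is invented for illustration.

```python
# Classic fixed-priority response-time analysis (a generic textbook sketch,
# not the Times tool's algorithm). tasks = [(C, T), ...] sorted from highest
# to lowest priority; each task's deadline is assumed equal to its period T.
from math import ceil

def response_time(tasks):
    """Return worst-case response times; None marks an unschedulable task."""
    results = []
    for i, (C_i, T_i) in enumerate(tasks):
        R = C_i
        while True:
            # interference from jobs of all higher-priority tasks in [0, R)
            R_next = C_i + sum(ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
            if R_next == R:
                break
            R = R_next
            if R > T_i:          # response time exceeds the deadline
                R = None
                break
        results.append(R)
    return results

# Invented example task set: all three tasks meet their deadlines.
print(response_time([(1, 4), (2, 6), (3, 12)]))   # [1, 3, 10]
```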

  • 2.
    Backeman, Peter
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    New techniques for handling quantifiers in Boolean and first-order logic (2016). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The automation of reasoning has been an aim of research for a long time. Already in the 17th century, the famous mathematician Leibniz invented a mechanical calculator capable of performing all four basic arithmetic operations. Although automatic reasoning can be done in different fields, many of the procedures for automated reasoning handle formulas of first-order logic. Examples of use cases include hardware verification, program analysis and knowledge representation.

    One of the fundamental challenges in first-order logic is handling quantifiers and the equality predicate. On the one hand, SMT solvers (Satisfiability Modulo Theories) are quite efficient at theory reasoning; on the other hand, they have limited support for complete and efficient reasoning with quantifiers. Sequent, tableau and resolution calculi are methods used to construct proofs of first-order formulas, and they can use more efficient techniques to handle quantifiers. Unfortunately, in contrast to SMT, handling theories in these calculi is more difficult.

    In this thesis we investigate methods to handle quantifiers by restricting search spaces to finite domains which can be explored in a systematic manner. We present this approach in two different contexts.

    First, we introduce a function-synthesis procedure based on template-based quantifier elimination, which we apply to the computation of gene interactions. The procedure is shown to generate smaller representations of solutions than previous solvers, and by restricting the constructed functions to certain forms we can produce formulas that are easier for a biologist to interpret.
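
    As a hedged illustration of representing Boolean functions as polynomials (a much simplified stand-in for the template-based synthesis described above, not the thesis's procedure), the algebraic normal form of a Boolean function can be computed from its truth table with the Möbius transform over GF(2):

```python
# Simplified illustration: represent a Boolean function as a polynomial over
# GF(2) by computing its algebraic normal form (ANF, Zhegalkin polynomial)
# from the truth table via the Möbius (butterfly) transform.

def anf(truth_table):
    """truth_table[i] is f applied to the bits of i (LSB = x0).
    Returns the set of monomials (as variable bitmasks) with coefficient 1."""
    n = len(truth_table).bit_length() - 1
    coeffs = list(truth_table)
    for j in range(n):                 # one butterfly pass per variable
        for i in range(len(coeffs)):
            if i & (1 << j):
                coeffs[i] ^= coeffs[i ^ (1 << j)]
    return {i for i, c in enumerate(coeffs) if c}

# AND has truth table [0, 0, 0, 1] -> the single monomial x0*x1 (mask 0b11)
print(anf([0, 0, 0, 1]))   # {3}
# XOR: [0, 1, 1, 0] -> x0 + x1 (masks 0b01 and 0b10)
print(anf([0, 1, 1, 0]))   # {1, 2}
```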

    Second, we introduce the concept of Bounded Rigid E-Unification (BREU), a finite form of unification that can be used to define a sound and complete sequent calculus for first-order logic with equality. We show how to solve this bounded form of unification efficiently, yielding a BREU-based first-order theorem prover that is competitive with other state-of-the-art tableau theorem provers.
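
    For flavour, plain syntactic (Robinson-style) first-order unification — which BREU extends with equational reasoning and a bounded search space — can be sketched as follows; the term encoding is invented for illustration and this is explicitly not BREU itself:

```python
# Ordinary syntactic unification, shown for flavour only; BREU additionally
# handles equations and bounds the substitution search space.
# Encoding (invented): variables are strings starting with '?', compound
# terms are tuples (function_symbol, arg, ...), constants are plain strings.

def walk(t, s):
    while isinstance(t, str) and t.startswith('?') and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def unify(a, b, s=None):
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a.startswith('?'):
        return None if occurs(a, b, s) else {**s, a: b}
    if isinstance(b, str) and b.startswith('?'):
        return None if occurs(b, a, s) else {**s, b: a}
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and len(a) == len(b) and a[0] == b[0]):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None   # symbol clash

# f(?x, g(?x)) unifies with f(c, ?y): ?x -> c, ?y -> g(?x) (i.e. g(c))
print(unify(('f', '?x', ('g', '?x')), ('f', 'c', '?y')))
# prints {'?x': 'c', '?y': ('g', '?x')}
```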

    List of papers
    1. Theorem proving with bounded rigid E-unification
    2015 (English). In: Automated Deduction – CADE-25, Springer, 2015, p. 572–587. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Springer, 2015
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 9195
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-268734 (URN); 10.1007/978-3-319-21401-6_39 (DOI); 000363947500039 (); 978-3-319-21400-9 (ISBN)
    Conference
    25th International Conference on Automated Deduction; August 1–7, 2015, Berlin, Germany
    Funder
    Swedish Research Council, 2014-5484
    Available from: 2015-07-25. Created: 2015-12-09. Last updated: 2018-01-10. Bibliographically approved
    2. Efficient algorithms for bounded rigid E-unification
    2015 (English). In: Automated Reasoning with Analytic Tableaux and Related Methods, Springer, 2015, p. 70–85. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Springer, 2015
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 9323
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-268735 (URN); 10.1007/978-3-319-24312-2_6 (DOI); 000366125200009 (); 978-3-319-24311-5 (ISBN)
    Conference
    TABLEAUX 2015, September 21–24, Wroclaw, Poland
    Funder
    Swedish Research Council, 2014-5484
    Available from: 2015-11-08. Created: 2015-12-09. Last updated: 2018-01-10. Bibliographically approved
    3. Algebraic polynomial-based synthesis for abstract Boolean network analysis
    2016 (English). In: Satisfiability Modulo Theories: SMT 2016, RWTH Aachen University, 2016, p. 41–50. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    RWTH Aachen University, 2016
    Series
    CEUR Workshop Proceedings, ISSN 1613-0073 ; 1617
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-311587 (URN)
    Conference
    14th International Workshop on Satisfiability Modulo Theories
    Available from: 2016-07-02. Created: 2016-12-29. Last updated: 2018-01-13. Bibliographically approved
  • 3.
    Ben Henda, Noomene
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Infinite-state Stochastic and Parameterized Systems (2008). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    A major current challenge consists in extending formal methods to handle infinite-state systems. Infiniteness stems from the fact that a system may operate on unbounded data structures such as stacks, queues, clocks or integers, as well as from parameterization.

    Systems with unbounded data structures are natural models for reasoning about communication protocols, concurrent programs, real-time systems, etc. Parameterized systems, in turn, are more suitable when the system consists of an arbitrary number of identical processes, as is the case for cache coherence protocols, distributed algorithms and so forth.

    In this thesis, we consider model checking problems for certain fundamental classes of probabilistic infinite-state systems, as well as the verification of safety properties in parameterized systems. First, we consider probabilistic systems with unbounded data structures. In particular, we study probabilistic extensions of Lossy Channel Systems (PLCS), Vector Addition Systems with States (PVASS) and Noisy Turing Machines (PNTM). We show how the semantics of such models can be described by infinite-state Markov chains, and then define certain abstract properties which allow model checking of several qualitative and quantitative problems.

    Then, we consider parameterized systems and provide a method for checking safety in several classes that differ in their topologies (linear or tree) and semantics (atomic or non-atomic). The method is based on deriving an over-approximation which allows the use of a symbolic backward reachability scheme. For each class, the over-approximation we define guarantees monotonicity of the induced approximate transition system with respect to an appropriate order. This property is convenient in the sense that it preserves upward closedness when computing sets of predecessors.
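
    Symbolic backward reachability over upward-closed sets can be illustrated on a very simple monotonic model, a vector addition system; this is a generic backward-coverability sketch with an invented example, not the thesis's construction for parameterized topologies:

```python
# Generic backward-coverability sketch for a vector addition system (an
# invented example of a monotonic system, not the thesis's construction).
# An upward-closed set of markings is represented by its minimal elements;
# by monotonicity, predecessor computation preserves upward closedness.

def leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def coverable(init, target, transitions):
    """Can some run from `init` reach a marking >= `target` componentwise?
    A transition is a vector of per-counter effects, firable when the
    resulting marking stays non-negative."""
    basis = {tuple(target)}          # minimal elements of the backward set
    changed = True
    while changed:                   # terminates by Dickson's lemma
        changed = False
        for t in list(basis):
            for delta in transitions:
                # minimal m with m + delta >= t and m + delta >= 0
                m = tuple(max(ti - d, -d, 0) for ti, d in zip(t, delta))
                if not any(leq(b, m) for b in basis):
                    basis = {b for b in basis if not leq(m, b)} | {m}
                    changed = True
    return any(leq(b, tuple(init)) for b in basis)

# Two counters; (-1, 2) spends one token of c0 for two of c1, and (1, -1)
# the reverse. From (0, 1) we can reach (1, 0) and then (0, 2).
print(coverable((0, 1), (0, 2), [(-1, 2), (1, -1)]))   # True
print(coverable((0, 0), (0, 2), [(-1, 2)]))            # False
```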

  • 4.
    Bengtson, Jesper
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Formalising process calculi (2010). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    As the complexity of programs increases, so does the complexity of the models required to reason about them. Process calculi were introduced in the early 1980s and have since been used to model communication protocols of varying size and scope. Whereas modelling sophisticated protocols in simple process algebras like CCS or the pi-calculus is doable, expressing the required models is often gruesome and error prone. To combat this, more advanced process calculi were introduced, which significantly reduce the complexity of the models. However, this simplicity comes at a price: the theories of the calculi themselves instead become gruesome and error prone, and establishing their mathematical and logical properties has turned out to be difficult. Many of the proposed calculi have later turned out to be inconsistent.

    The contribution of this thesis is twofold. First, we provide methodologies to formalise the meta-theory of process calculi in an interactive theorem prover. These are used to formalise significant parts of the meta-theory of CCS and the pi-calculus in the theorem prover Isabelle, using Nominal Logic to allow for a smooth treatment of binders. Second, we introduce and formalise psi-calculi, a framework for process calculi that incorporates several existing ones, including those we already formalised, and which is significantly simpler and substantially more expressive. Our methods scale well as the complexity of the calculi increases.

    The formalised results for all calculi include congruence results for both strong and weak bisimilarities, in the case of the pi-calculus for both the early and the late operational semantics. For the finite pi-calculus, we also formalise the proof that the axiomatisation of strong late bisimilarity is sound and complete. We believe psi-calculi to be the state of the art of process calculi, and our formalisation to be the most extensive formalisation of process calculi ever done inside a theorem prover.

  • 5.
    Bengtsson, Johan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Clocks, DBMs and States in Timed Systems (2002). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Today, computers are used to control various technical systems in our society. In many cases, time plays a crucial role in the operation of computers embedded in such systems. This thesis is about techniques and tools for the analysis of timing behaviours of computer systems. Its main contributions are in the development and implementation of UPPAAL, a tool designed to automate the analysis process of systems modelled as timed automata.

    As the first contribution, we present a software package for timing constraints represented as Difference Bound Matrices (DBMs). We describe in detail all data structures and operations for DBMs needed in state-space exploration of timed automata, as well as techniques for efficient implementation. In particular, we have developed two normalisation algorithms to guarantee termination of reachability analysis for timed automata containing constraints on clock differences; they transform DBMs according to not only the maximal constants of clocks, as in algorithms published in the literature, but also the difference constraints appearing in the automata.

    The second contribution of this thesis is a collection of low-level optimisations of the internal data structures and algorithms of UPPAAL to minimise memory and time consumption. We present compression techniques that allow the state space of a system to be efficiently stored and manipulated in main memory. We also study super-trace and hash-compaction methods for timed automata to deal with system models for which the available memory is too small to store the explored state space. Our experiments show that these techniques have greatly improved the performance of UPPAAL.

    The third contribution is in partial-order reduction techniques for timed systems. A major problem in automatic verification is the large number of redundant states and transitions introduced by modelling concurrent events as interleaved transitions. We propose a notion of committed locations for timed automata. Committed locations are annotations that can be used not only for modelling intermediate states within atomic transitions, but also for guiding the model checker to ignore unnecessary interleavings in state-space exploration. The notion of committed locations has been generalised to give a local-time semantics for networks of timed automata, which allows the application of existing partial-order reduction techniques to timed systems.
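
    A minimal sketch of the DBM idea, under the simplifying assumption that strict/non-strict bounds are ignored (real DBM packages such as UPPAAL's track those too): entry D[i][j] bounds the clock difference x_i − x_j, a reference clock x_0 is fixed at 0, and the canonical form is obtained by all-pairs shortest paths:

```python
# Simplified DBM sketch: integer weights only, ignoring the strict/non-strict
# flag that real DBM libraries carry. D[i][j] encodes x_i - x_j <= D[i][j],
# with reference clock x_0 = 0. Canonical form = all-pairs shortest paths
# (Floyd-Warshall); a negative diagonal entry means the zone is empty.

def canonical(D):
    n = len(D)
    D = [row[:] for row in D]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    if any(D[i][i] < 0 for i in range(n)):
        return None                      # inconsistent constraints
    return D

# One clock x with 2 <= x <= 5: D[1][0] = 5 (x - 0 <= 5), D[0][1] = -2
# (0 - x <= -2). This zone is already canonical:
print(canonical([[0, -2], [5, 0]]))      # [[0, -2], [5, 0]]
# Tightening to x <= 1 contradicts x >= 2: the zone becomes empty.
print(canonical([[0, -2], [1, 0]]))      # None
```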

  • 6.
    Bengtsson, Johan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Efficient symbolic state exploration of timed systems: Theory and implementation (2001). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Timing aspects are important for the correctness of safety-critical systems. It is crucial that these aspects are carefully analysed in designing such systems. UPPAAL is a tool designed to automate the analysis process. In UPPAAL, a system under construction is described as a network of timed automata, and the desired properties of the system can be specified using a query language. UPPAAL can then explore the state space of the system description to search for states violating (or satisfying) the properties. If such states are found, the tool provides diagnostic information, in the form of executions leading to those states, to help the designers, for example, locate bugs in the design.

    The major problem for UPPAAL, and other tools for timed systems, in dealing with industrial-size applications is state-space explosion. This thesis studies the sources of the problem and develops techniques for real-time model checkers, such as UPPAAL, to attack it. As contributions, we have developed the notion of committed locations to model atomicity, a local-time semantics for timed systems to allow partial-order reductions, and a number of implementation techniques to reduce time and space consumption in state-space exploration. The techniques are studied and compared in case studies. Our experiments demonstrate significant improvements in the performance of UPPAAL.

  • 7.
    Berg, Erik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Efficient and Flexible Characterization of Data Locality through Native Execution Sampling (2005). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Data locality is central to modern computer designs. The widening gap between processor speed and memory latency has introduced the need for a deep hierarchy of caches. Thus, the performance of an application is to a large extent dependent on the amount of data locality the caches can exploit. Some data locality comes naturally from the way most programs are written and the way their data is allocated in the memory. Compilers further try to create data locality by loop transformations and optimized data layout. Different ways of writing a program and/or laying out its data may improve an application’s locality even more. However, it is far from obvious how such a locality optimization can be achieved, especially since the optimizing compiler may have left the optimization job half done. Thus, efficient tools are needed to guide the software developers on their quest for data locality.

    The main contribution of this dissertation is a novel sample-based method for analyzing the data locality of an application. Very sparse data is collected during a single execution of the studied application. The sparse sampling adds minimal overhead to the execution time, which enables complex applications running realistic data sets to be studied. The architecture-independent information collected during the execution is fed to a mathematical cache model for predicting the cache miss ratio. The sparsely collected data can be used to characterize the application's data locality with respect to almost any possible cache hierarchy, such as complicated multiprocessor memory systems with multilevel cache hierarchies. Any combination of cache size, cache line size and degree of sharing can be modeled. Each new modeled design point takes only a fraction of a second to evaluate, even though the application from which the sampled data was collected may have executed for hours. This makes the tool usable not just for software developers, but also for hardware developers who need to evaluate a huge memory-system design space.

    We also discuss different ways of presenting data-locality information to a programmer in an intuitive and easily interpreted way. Some of the locality metrics we introduce utilize the flexibility of our algorithm and its ability to vary different cache parameters for one run. The dissertation also presents several prototype implementations of tools for profiling the memory system.

    List of papers
    1. SIP: Performance Tuning through Source Code Interdependence
    2002. In: Proceedings of the 8th International Euro-Par Conference. Article in journal (Refereed), Published
    Identifiers
    urn:nbn:se:uu:diva-93582 (URN)
    Available from: 2005-10-19. Created: 2005-10-19. Bibliographically approved
    2. StatCache: A Probabilistic Approach to Efficient and Accurate Data Locality Analysis
    2004. In: Proceedings of the 2004 IEEE International Symposium on Performance Analysis of Systems and Software. Article in journal (Refereed), Published
    Identifiers
    urn:nbn:se:uu:diva-93583 (URN)
    Available from: 2005-10-19. Created: 2005-10-19. Bibliographically approved
    3. Fast Data-Locality Profiling of Native Execution
    2005 (English). In: ACM SIGMETRICS Performance Evaluation Review, ISSN 0163-5999, Vol. 33, no 1, p. 169–180. Article in journal (Refereed), Published
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-93584 (URN); 10.1145/1071690.1064232 (DOI)
    Available from: 2005-10-19. Created: 2005-10-19. Last updated: 2018-01-13. Bibliographically approved
    4. A Statistical Multiprocessor Cache Model
    2006 (English). In: Proc. International Symposium on Performance Analysis of Systems and Software: ISPASS 2006, Piscataway, NJ: IEEE, 2006, p. 89–99. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Piscataway, NJ: IEEE, 2006
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-93585 (URN); 10.1109/ISPASS.2006.1620793 (DOI); 1-4244-0186-0 (ISBN)
    Available from: 2005-10-19. Created: 2005-10-19. Last updated: 2018-01-13. Bibliographically approved
  • 8.
    Berg, Erik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Methods for run time analysis of data locality (2003). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The growing gap between processor clock speed and DRAM access time puts new demands on software and development tools. Deep memory hierarchies and high cache miss penalties in present and emerging computer systems make execution time sensitive to data locality. Therefore, developers of performance-critical applications and optimizing compilers must be aware of data locality and maximize cache utilization to produce fast code. To aid the optimization process and help understanding data locality, we need methods to analyze programs and pinpoint poor cache utilization and possible optimization opportunities.

    Current methods for run-time analysis of data locality and cache behavior include functional cache simulation, often combined with set sampling or time sampling; regularity metrics based on strides and data streams; and hardware monitoring. However, they all share a trade-off between run-time overhead, accuracy and explanatory power.

    This thesis presents methods to efficiently analyze data locality at run time based on cache modeling. It suggests source-interdependence profiling as a technique for examining the cache behavior of applications and locating source code statements and/or data structures that cause poor cache utilization. The thesis also introduces a novel statistical cache-modeling technique, StatCache. Rather than implementing a functional cache simulator, StatCache estimates the miss ratios of fully-associative caches using probability theory. A major advantage of the method is that the miss ratio estimates can be based on very sparse sampling. Further, a single run of an application is enough to estimate the miss ratio of caches of arbitrary sizes and line sizes and to study both spatial and temporal data locality.
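
    A hedged, much simplified rendering of the StatCache idea (not the published model in full): for a fully associative cache with L lines under random replacement, an access with reuse distance r misses with probability roughly 1 − (1 − m/L)^r, where m is the overall miss ratio; m is then obtained as a fixed point over the sampled reuse distances. The sample data below is invented.

```python
# Hedged sketch of the StatCache idea (a simplification, not the published
# model in full). For a fully associative cache with `lines` lines and
# random replacement, each intervening miss may evict our line, so an access
# with reuse distance r (intervening memory references) misses with
# probability about 1 - (1 - m/lines)**r. The overall miss ratio m is the
# fixed point of averaging this over the sampled reuse distances.

def miss_ratio(reuse_distances, lines, iters=200):
    m = 0.5                               # initial guess
    n = len(reuse_distances)
    for _ in range(iters):                # fixed-point iteration
        m = sum(1 - (1 - m / lines) ** r for r in reuse_distances) / n
    return m

# Invented sample: 90 % short reuses, 10 % very long reuses.
samples = [8] * 90 + [10000] * 10
print(round(miss_ratio(samples, lines=256), 3))   # ~0.1: long reuses miss
print(round(miss_ratio(samples, lines=4), 3))     # tiny cache: much higher
```

    Note how one set of samples characterizes caches of any size, which is exactly the flexibility the abstract describes.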

  • 9.
    Berg, Therese
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Regular inference for reactive systems (2006). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Models of reactive systems play a central role in many techniques for their verification and analysis. Both a specification of the system and the abstract behavior of the system can be expressed in a formal model. Compliance with the functional parts of the specification can be checked in different ways. Model checking techniques can be applied to a model of the system or directly to source code. In testing, model-based techniques can generate test suites from the specification. A bottleneck in model-based techniques, however, is constructing the model of the system. This work concerns a technique that automatically constructs a model of a system without access to its specification, code or internal structure. We assume that the responses of the system to sequences of input can be observed. In this setting, so-called regular inference techniques build a model of the system from its responses to selected input sequences.

    There are three main contributions in this thesis. The first is a survey of the most well-known techniques for regular inference. The second is an analysis of Angluin's algorithm for regular inference on synthesized examples. On a particular type of example with prefix-closed languages, typically used to model reactive systems, the required number of input sequences grows approximately quadratically in the number of transitions of the system. However, using an optimization for systems with prefix-closed languages, we were able to reduce the number of required input sequences by about 20%. The third contribution is a regular inference technique developed for systems with parameters. It aims to better handle entities of communication protocols, where messages normally have many parameters of which only a few determine the subsequent behavior of the system. Experiments with our implementation of the technique confirm a reduction of the required number of input sequences in comparison with Angluin's algorithm.
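
    The prefix-closedness optimization can be illustrated in miniature (this toy is not Angluin's full algorithm, and the language is invented): in a prefix-closed language, once any prefix of a word is known to be rejected, membership queries for its extensions can be answered without consulting the system.

```python
# Toy illustration of the prefix-closedness optimization (not the full
# Angluin L* learner): in a prefix-closed language, every extension of a
# rejected word is also rejected, so its membership query costs nothing.

class PrefixClosedOracle:
    def __init__(self, accepts):
        self.accepts = accepts          # the system under learning
        self.rejected = set()           # words already observed rejected
        self.queries = self.saved = 0

    def member(self, word):
        if any(word[:k] in self.rejected for k in range(len(word) + 1)):
            self.saved += 1             # answered for free via a prefix
            return False
        self.queries += 1               # real query to the system
        ok = self.accepts(word)
        if not ok:
            self.rejected.add(word)
        return ok

# Invented prefix-closed language: all words over {a, b} with no 'b'.
oracle = PrefixClosedOracle(lambda w: 'b' not in w)
for w in ['', 'a', 'ab', 'aba', 'abb', 'aa']:
    oracle.member(w)
print(oracle.queries, oracle.saved)     # 4 2: two queries answered for free
```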

  • 10.
    Berglund, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Learning computer systems in a distributed project course: The what, why, how and where (2005). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Senior university students taking an internationally distributed project course in computer systems find themselves in a complex learning situation. To understand how they experience computer systems and act in their learning situation, the what, the why, the how and the where of their learning have been studied from the students’ perspective. The what aspect concerns the students’ understanding of concepts within computer systems: network protocols. The why aspect concerns the students’ objectives to learn computer systems. The how aspect concerns how the students go about learning. The where aspect concerns the students’ experience of their learning environment. These metaphorical entities are then synthesised to form a whole.

    The emphasis on the students’ experience of their learning motivates a phenomenographic research approach as the core of a study that is extended with elements of activity theory. The methodological framework that is developed from these research approaches enables the researcher to retain focus on learning, and specifically the learning of computer systems, throughout.

    By applying the framework, the complexity in the learning is unpacked and conclusions are drawn on the students’ learning of computer systems. The results are structural, qualitative, and empirically derived from interview data. They depict the students’ experience of their learning of computer systems in their experienced learning situation and highlight factors that facilitate learning.

    The results comprise sets of qualitatively different categories that describe how the students relate to their learning in their experienced learning environment. The sets of categories, grouped after the four components (what, why, how and where), are synthesised to describe the whole of the students’ experience of learning computer systems.

    This study advances the discussion about learning computer systems and demonstrates how theoretically anchored research contributes to teaching and learning in the field. Its multi-faceted, multi-disciplinary character invites further debate, and thus, advances the field.

  • 11.
    Berglund, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    On the understanding of computer network protocols (2002). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    How students learn about network protocols is studied in a project-centred, internationally distributed, university course in computer systems taught jointly by two universities. Insights into students' understanding of basic concepts within computer networks are gained through an empirical phenomenographic research approach.

    The use of phenomenography as a research approach makes it possible to learn about computer science, as it is experienced by the students. The context in which the research is carried out and issues concerning by whom the context is experienced, are investigated and form a part of the methodological basis.

    Students' understanding of some protocols that are used within the project, as well as their experience of the general concept of network protocols, is investigated, and different ways of experiencing the protocols are discerned. Some aspects that indicate good learning outcomes are identified, such as being capable of understanding a protocol in different ways and of making relevant choices between the ways it could be experienced according to the context in which it appears.

    Based on these results a discussion on learning and teaching is developed. It is argued that a variation in the context in which the protocol is experienced promotes good learning, since different ways of experiencing a protocol are useful with different tasks to hand. A student with a good understanding of network protocols can choose in a situationally relevant way between different ways of experiencing a protocol.

    List of papers
    1. How do students understand network protocols?: A phenomenographic study
    2002 (English)Report (Other academic)
    Series
    Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2002-006
    National Category
    Computer Sciences Educational Sciences
    Identifiers
    urn:nbn:se:uu:diva-44859 (URN)
    Available from: 2008-11-26 Created: 2008-11-26 Last updated: 2018-01-11
    2. On context in phenomenographic research on understanding heat and temperature
    2002 (English)In: EARLI, Bi-annual Symposium, Fribourg, Switzerland, 2002Conference paper, Published paper (Refereed)
    Abstract [en]

    Starting from an empirical study of lay adults' understanding of heat and temperature, we distinguish between different meanings of "context" in phenomenographic research. To confuse the variation in ways of experiencing the context(s) of the study with the variation in ways of experiencing the phenomenon of study is to risk losing fundamental insights. We discuss context as experienced and as interwoven with the experience of the phenomenon, and analyse its significance in two dimensions: (1) the stage of the research project: formulating the question, collecting data, analysing data and deploying results; and (2) "who is experiencing" the context: the individual, the collective, or the researcher. The arguments are illustrated from the empirical study.

    National Category
    Computer Sciences Educational Sciences
    Identifiers
    urn:nbn:se:uu:diva-18488 (URN)
    Conference
    EARLI, Bi-annual Symposium, Fribourg, Switzerland
    Available from: 2008-11-26 Created: 2008-11-26 Last updated: 2018-01-12
  • 12.
    Bjurefors, Fredrik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Measurements in opportunistic networks2012Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Opportunistic networks are a subset of delay tolerant networks where the contacts are unscheduled. Such networks can be formed ad hoc by wireless devices, such as mobile phones and laptops. In this work we use a data-centric architecture for opportunistic networks to evaluate data dissemination overhead, congestion in nodes' buffers, and the impact of transfer ordering. Dissemination brings overhead, since data is replicated to spread it through the network, and overhead leads to congestion, i.e., overloaded buffers.

    We develop and implement an emulation testbed to experimentally evaluate properties of opportunistic networks. We evaluate the repeatability of experiments in the emulated testbed, which is based on virtual machines, and show that the timing variations are on the order of milliseconds.

    The testbed was used to investigate overhead in data dissemination, congestion avoidance, and transfer ordering in opportunistic networks. We show that the overhead can be reduced by informing other nodes in the network about what data a node is carrying. Congestion avoidance was evaluated in terms of buffer management, since that is the tool available in an opportunistic network for handling congestion. It was shown that eviction based on replication information about the data objects in the buffer yields the best results. We show that in a data-centric architecture, where each data item is valued differently, transfer ordering is important to achieve delivery of the most valued data.

  • 13.
    Bjurefors, Fredrik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Opportunistic Networking: Congestion, Transfer Ordering and Resilience2014Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Opportunistic networks are constructed by devices carried by people and vehicles. The devices use short range radio to communicate. Since the network is mobile and often sparse in terms of node contacts, nodes store messages in their buffers, carry them, and forward them upon node encounters. This form of communication leads to a set of challenging issues that we investigate: congestion, transfer ordering, and resilience.

    Congestion occurs in opportunistic networks when a node's buffers become full. To be able to receive new messages, old messages have to be evicted. We show that buffer eviction strategies based on replication statistics perform better than strategies that evict messages based on their content.
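
    The eviction idea can be sketched as follows. This is a minimal illustration assuming each buffered message carries a replication counter; the function names and data layout are our own, not the thesis implementation.

```python
# Minimal sketch of replication-based buffer eviction (illustrative,
# not the thesis code): when the buffer is full, evict the message
# that has been replicated to the most other nodes, on the assumption
# that well-replicated messages are already widely spread.

def evict_most_replicated(buffer):
    """buffer: dict mapping message id -> replication count."""
    victim = max(buffer, key=buffer.get)
    del buffer[victim]
    return victim

def receive(buffer, msg_id, capacity):
    """Insert msg_id, evicting the most-replicated message if full."""
    if len(buffer) >= capacity and msg_id not in buffer:
        evict_most_replicated(buffer)
    buffer.setdefault(msg_id, 0)

buf = {"a": 5, "b": 1, "c": 3}
receive(buf, "d", capacity=3)   # evicts "a" (replicated 5 times)
```

    A content-based strategy would instead rank messages by some valuation of their payload; the comparison in the thesis is between these two families of policies.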

    We show that transfer ordering has a significant impact on the dissemination of messages during time limited contacts. We find that transfer strategies satisfying global requests yield a higher delivery ratio but a longer delay for the most requested data compared to satisfying the neighboring node's requests.

    Finally, we assess the resilience of opportunistic networks by simulating different types of attacks. Instead of enumerating all possible attack combinations, which would require exhaustive evaluations, we introduce a method that uses heuristics to approximate the extreme outcomes an attack can have. The method yields a lower and upper bound for the evaluated metric over the different realizations of the attack. We show that the outcome of some types of attacks is harder to predict than others, and that an attack's impact may vary with the properties of the attack, the forwarding protocol, and the mobility pattern.

    List of papers
    1. Haggle Testbed: a Testbed for Opportunistic Networks
    2011 (English)In: In Proceedings of the 7th Swedish National Computer Networking Workshop, 2011Conference paper, Published paper (Refereed)
    Identifiers
    urn:nbn:se:uu:diva-155530 (URN)
    Projects
    Haggle
    Available from: 2011-06-23 Created: 2011-06-23 Last updated: 2014-06-30
    2. Congestion Avoidance in a Data-Centric Opportunistic Network
    2011 (English)In: Proceedings of the 2011 ACM SIGCOMM Workshop on Information-Centric Networking (ICN-2011), 2011Conference paper, Published paper (Refereed)
    Identifiers
    urn:nbn:se:uu:diva-155528 (URN)
    Projects
    ResumeNet
    Available from: 2011-06-23 Created: 2011-06-23 Last updated: 2014-06-30
    3. Making the Most of Your Contacts: Transfer Ordering in Data-Centric Opportunistic Networks
    2012 (English)In: Proceedings of the 2012 ACM MobiOpp Workshop on Mobile Opportunistic Networks, Zürich: ACM Press, 2012Conference paper, Published paper (Refereed)
    Abstract [en]

    Opportunistic networks use unpredictable and time-limited contacts to disseminate data. Therefore, it is important that protocols transfer useful data when contacts do occur. Specifically, in a data-centric network, nodes benefit from receiving data relevant to their interests. To this end, we study five strategies to select and order the data to be exchanged during a limited contact, and measure their ability to promptly and efficiently deliver highly relevant data.

    Our trace-driven experiments on an emulation testbed suggest that nodes benefit in the short term from ordering data transfers to satisfy local interests. However, this can lead to suboptimal long-term system performance. Restricting sharing based on matching nodes’ interests can lead to segregation of the network, and limit useful dissemination of data. A non-local understanding of other nodes’ interests is necessary to effectively move data across the network. If ordering of transfers for data relevance is not explicitly considered, performance is comparable to random ordering, which limits the delivery of individually relevant data.
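
    The contrast between local and global ordering can be pictured with a small sketch; the two strategies and their names are illustrative simplifications of the five strategies studied in the paper.

```python
# Illustrative sketch (not the paper's code) of two transfer-ordering
# strategies for a time-limited contact: order the data objects by the
# receiving neighbour's own interests, or by how requested each object
# is across the whole network.

def order_local(objects, neighbour_interests):
    """Send objects matching the neighbour's interests first."""
    return sorted(objects, key=lambda o: o in neighbour_interests, reverse=True)

def order_global(objects, global_demand):
    """Send the most globally requested objects first."""
    return sorted(objects, key=lambda o: global_demand.get(o, 0), reverse=True)

objs = ["news", "music", "maps"]
print(order_local(objs, {"maps"}))                 # ['maps', 'news', 'music']
print(order_global(objs, {"news": 9, "maps": 2}))  # ['news', 'maps', 'music']
```

    Under a short contact only a prefix of the ordered list is transferred, which is why the choice of ordering matters for what actually gets delivered.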

    Place, publisher, year, edition, pages
    Zürich: ACM Press, 2012
    National Category
    Communication Systems
    Identifiers
    urn:nbn:se:uu:diva-171587 (URN)
    Conference
    ACM MobiOpp
    Projects
    ResumeNet
    Available from: 2012-03-22 Created: 2012-03-22 Last updated: 2014-06-30
    4. Resilience and Opportunistic Forwarding: Beyond Average Value Analysis
    2014 (English)In: Computer Communications, ISSN 0140-3664, E-ISSN 1873-703X, Vol. 48, no SI, p. 111-120Article in journal (Refereed) Published
    Abstract [en]

    Opportunistic networks are systems with highly distributed operation, relying on the altruistic cooperation of highly heterogeneous, and not always software and hardware-compatible, user nodes. Moreover, the absence of central coordination and control makes them vulnerable to malicious attacks. In this paper, we study the resilience of popular forwarding protocols to a representative set of challenges to their normal operation. These include jamming locally disturbing message transfer between nodes, hardware/software failures and incompatibility among nodes rendering contact opportunities useless, and free-riding phenomena. We first formulate and promote the metric envelope concept as a tool for assessing the resilience of opportunistic forwarding schemes. Metric envelopes depart from the standard practice of average value analysis and explicitly account for the differentiated challenge impact due to node heterogeneity (device capabilities, mobility) and attackers’ intelligence. We then propose heuristics to generate worst- and best-case challenge realization scenarios and approximate the lower and upper bounds of the metric envelopes. Finally, we demonstrate the methodology in assessing the resilience of three popular forwarding protocols in the presence of the three challenges, and under a comprehensive range of mobility patterns. The metric envelope approach provides better insights into the level of protection path diversity and message replication provide against different challenges, and enables more informed choices in opportunistic forwarding when network resilience becomes important.

    National Category
    Communication Systems
    Identifiers
    urn:nbn:se:uu:diva-222822 (URN)10.1016/j.comcom.2014.04.004 (DOI)000337883200010 ()
    Projects
    ResumeNet, WISENET
    Funder
    EU, FP7, Seventh Framework Programme, FP7-224619
    Note

    Special Issue

    Available from: 2014-04-17 Created: 2014-04-14 Last updated: 2017-12-05Bibliographically approved
  • 14.
    Blom, Johan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Model-Based Protocol Testing in an Erlang Environment2016Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Testing is the dominant technique for quality assurance of software systems. It typically consumes considerable resources in development projects, and is often performed in an ad hoc manner. This thesis is concerned with model-based testing, which is an approach to make testing more systematic and more automated. The general idea in model-based testing is to start from a formal model, which captures the intended behavior of the software system to be tested. On the basis of this model, test cases can be generated in a systematic way. Since the model is formal, the generation of test suites can be automated and, with adequate tool support, one can automatically quantify to what degree they exercise the tested software.

    Despite the significant improvements on model-based testing in the last 20 years, acceptance by industry has so far been limited. A number of commercially available tools exist, but still most testing in industry relies on manually constructed test cases.

    This thesis addresses this problem by presenting a methodology and associated tool support, intended for model-based testing of communication protocol implementations in industry. A major goal was to make the developed tool suitable for industrial use, which meant considering several problems that typically are not addressed in the literature on model-based testing. The thesis presents several technical contributions to the area of model-based testing, including

    - a new specification language based on the functional programming language Erlang,

    - a novel technique for specifying coverage criteria for test suite generation, and

    - a technique for automatically generating test suites.

    Based on these developments, we have implemented a complete tool chain that generates and executes complete test suites, given a model in our specification language. The thesis also presents a substantial industrial case study, where our technical contributions and the implemented tool chain are evaluated. Findings from the case study include that test suites generated using (model) coverage criteria have at least as good fault-detection capability as equally large random test suites, and that model-based testing could discover faults in previously well-tested software where previous testing had employed a relaxed validation of requirements.

  • 15.
    Bohlin, Therese
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Regular Inference for Communication Protocol Entities2009Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    A way to create well-functioning computer systems is to automate error detection in the systems. Automated techniques for finding errors, such as testing and formal verification, require a model of the system. The technique for constructing deterministic finite automata (DFA) models, without access to the source code, is called regular inference. The technique provides sequences of input, so-called membership queries, to a system, observes the responses, and infers a model from the input and responses.

    This thesis presents work to adapt regular inference to a certain kind of system: communication protocol entities. Such entities interact by sending and receiving messages consisting of a message type and a number of parameters, each of which can potentially take on a large number of values. This may cause a model of a communication protocol entity inferred by regular inference to be very large and take a long time to infer. Since regular inference creates a model from the observed behavior of a communication protocol entity, the model may be very different from a designer's model of the system's source code.

    This thesis presents adaptations of regular inference to infer more compact models using fewer membership queries. The first contribution is a survey of three algorithms for regular inference. We present their similarities and their differences in terms of the required number of membership queries. The second contribution is an investigation of how many membership queries a common regular inference algorithm, the L* algorithm by Angluin, requires for randomly generated DFAs and for randomly generated DFAs with a structure common to communication protocol entities. In comparison, the DFAs with a structure common to communication protocol entities require more membership queries. The third contribution is an adaptation of regular inference to communication protocol entities whose behavior is primarily affected by the message types. The adapted algorithm avoids asking membership queries containing messages with parameter values that result in already observed responses. The fourth contribution is an approach for regular inference of communication protocol entities that communicate with messages containing parameter values from very large ranges. The approach infers compact models, and uses parameter values taken from a small portion of their ranges in membership queries. The fifth contribution is an approach to infer compact models of communication protocol entities that partition an entity's behavior into control states in a way similar to a designer's model of the protocol.
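
    The membership queries at the heart of these algorithms can be pictured with a small sketch; the black-box system and the function names below are illustrative stand-ins of our own, not the thesis's protocol entities or its L* implementation.

```python
# Illustrative sketch of membership queries in regular inference: the
# learner submits input words to a black-box system and records whether
# each word is accepted, building up the observations from which a DFA
# is later constructed. Names here are hypothetical.

def target_accepts(word):
    """Black-box system under inference: accepts words with an even number of 'a's."""
    return word.count("a") % 2 == 0

def membership_queries(words):
    """One query per word; a real L* run chooses its words adaptively."""
    return {w: target_accepts(w) for w in words}

obs = membership_queries(["", "a", "aa", "ab", "aab"])
# obs == {'': True, 'a': False, 'aa': True, 'ab': False, 'aab': True}
```

    The cost measure discussed in the thesis is the number of such queries, which grows quickly when each message type carries parameters with many possible values.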

    List of papers
    1. Model Checking
    2005 (English)In: Model-Based Testing of Reactive Systems: Advanced Lectures, Berlin / Heidelberg: Springer , 2005, p. 557-603Chapter in book (Other academic)
    Place, publisher, year, edition, pages
    Berlin / Heidelberg: Springer, 2005
    Series
    Lecture Notes in Computer Sciences: Programming and Software Engineering, ISSN 0302-9743 ; 3472
    National Category
    Computer and Information Sciences
    Identifiers
    urn:nbn:se:uu:diva-98082 (URN)978-3-540-26278-7 (ISBN)
    Available from: 2009-02-06 Created: 2009-02-06 Last updated: 2018-01-13Bibliographically approved
    2. Insights to Angluin's Learning
    2005 (English)In: Electronical Notes in Theoretical Computer Science, ISSN 1571-0661, E-ISSN 1571-0661, Vol. 118, p. 3-18Article in journal (Refereed) Published
    Abstract [en]

    Among other domains, learning finite-state machines is important for obtaining a model of a system under development, so that powerful formal methods such as model checking can be applied.

    A prominent algorithm for learning such devices was developed by Angluin. We have implemented this algorithm in a straightforward way to gain further insights into its practical applicability. Furthermore, we have analyzed its performance on randomly generated as well as real-world examples. Our experiments focus on the impact of the alphabet size and the number of states on the needed number of membership queries. Additionally, we have implemented and analyzed an optimized version for learning prefix-closed regular languages. Memory consumption was one major obstacle when we attempted to learn large examples.

    We see that prefix-closed languages are relatively hard to learn compared to arbitrary regular languages. The optimization, however, shows positive results.

    Keywords
    deterministic finite-state automata, learning algorithm, regular languages, prefix-closed regular languages
    National Category
    Computer and Information Sciences
    Identifiers
    urn:nbn:se:uu:diva-98083 (URN)10.1016/j.entcs.2004.12.015 (DOI)
    Available from: 2009-02-06 Created: 2009-02-06 Last updated: 2018-01-13Bibliographically approved
    3. Regular Inference for State Machines with Parameters
    2006 (English)In: Fundamental approaches to software engineering: ( 9th international conference, FASE 2006, held as part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2006, Vienna, Austria, March 27-28, 2006 ), Berlin: Springer , 2006, p. 107-121Chapter in book (Other academic)
    Abstract [en]

    Techniques for inferring a regular language, in the form of a finite automaton, from a sufficiently large sample of accepted and nonaccepted input words, have been employed to construct models of software and hardware systems, for use, e.g., in test case generation. We intend to adapt these techniques to construct state machine models of entities of communication protocols. The alphabet of such state machines can be very large, since a symbol typically consists of a protocol data unit type with a number of parameters, each of which can assume many values. In typical algorithms for regular inference, the number of needed input words grows with the size of the alphabet and the size of the minimal DFA accepting the language. We therefore modify such an algorithm (Angluin's algorithm) so that its complexity grows not with the size of the alphabet, but only with the size of a certain symbolic representation of the DFA. The main new idea is to infer, for each state, a partitioning of input symbols into equivalence classes, under the hypothesis that all input symbols in an equivalence class have the same effect on the state machine. Whenever such a hypothesis is disproved, equivalence classes are refined. We show that our modification retains the good properties of Angluin's original algorithm, but that its complexity grows with the size of our symbolic DFA representation rather than with the size of the alphabet. We have implemented the algorithm; experiments on synthesized examples are consistent with these complexity results.

    Place, publisher, year, edition, pages
    Berlin: Springer, 2006
    Series
    Lecture notes in computer science, ISSN 0302-9743
    Keywords
    Test generation, Algorithm complexity, Modeling, Equivalence classes, Deterministic automaton, Data type, Transmission protocol, Finite automaton, Regular language, Inference, Software development
    National Category
    Computer and Information Sciences
    Identifiers
    urn:nbn:se:uu:diva-98084 (URN)3-540-33093-3 (ISBN)
    Available from: 2009-02-06 Created: 2009-02-06 Last updated: 2018-01-13Bibliographically approved
    4. Regular Inference for State Machines Using Domains with Equality Tests
    2008 (English)In: Fundamental Approaches to Software Engineering / [ed] Fiadeiro JL; Inverardi P, Berlin: Springer-Verlag , 2008, p. 317-331Conference paper, Published paper (Refereed)
    Abstract [en]

    Existing algorithms for regular inference (aka automata learning) allow one to infer a finite state machine by observing the output that the machine produces in response to a selected sequence of input strings. We generalize regular inference techniques to infer a class of state machines with an infinite state space. We consider Mealy machines extended with state variables that can assume values from a potentially unbounded domain. These values can be passed as parameters in input and output symbols, and can be used in tests for equality between state variables and/or message parameters. This is to our knowledge the first extension of regular inference to infinite-state systems. We intend to use these techniques to generate models of communication protocols from observations of their input-output behavior. Such protocols often have parameters that represent node addresses, connection identifiers, etc., which have a large domain, and on which test for equality is the only meaningful operation. Our extension consists of two phases. In the first phase we apply an existing inference technique for finite-state Mealy machines to generate a model for the case that the values are taken from a small data domain. In the second phase we transform this finite-state Mealy machine into an infinite-state Mealy machine by folding it into a compact symbolic form.

    Place, publisher, year, edition, pages
    Berlin: Springer-Verlag, 2008
    Series
    Lecture Notes in Computer Science ; 4961
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-98085 (URN)10.1007/978-3-540-78743-3_24 (DOI)000254603000024 ()978-3-540-78742-6 (ISBN)
    Conference
    11th International Conference on Fundamental Approaches to Software Engineering Budapest, HUNGARY, MAR 29-APR 06, 2008
    Available from: 2009-02-06 Created: 2009-02-06 Last updated: 2018-01-13Bibliographically approved
    5. Regular Inference for Communication Protocol Entities
    2008 (English)Report (Other academic)
    Series
    Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2008-024
    Identifiers
    urn:nbn:se:uu:diva-98086 (URN)
    Available from: 2009-02-06 Created: 2009-02-06 Last updated: 2009-06-09Bibliographically approved
  • 16.
    Borgh, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Attribute-Based Encryption in Systems with Resource Constrained Devices in an Information Centric Networking Context2016Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    An extensive analysis of attribute-based encryption (ABE) in systems with resource constrained devices is performed. Two system solutions of how ABE can be performed in such systems are proposed, one where the ABE operations are performed at the resource constrained devices and one where ABE is performed at a powerful server. The system solutions are discussed with three different ABE schemes. Two of the schemes are the traditional key policy ABE (KP-ABE) and ciphertext policy ABE (CP-ABE). The third scheme is using KP-ABE to simulate CP-ABE, in an attempt to benefit from KP-ABE being computationally cheaper than CP-ABE while maintaining the intuitive way of using CP-ABE.

    ABE is a computationally expensive encryption method which might not be feasible to perform on resource-constrained sensors, depending on the hardware.

    An implementation of a CP-ABE scheme with a 128-bit security level was written and used to evaluate the feasibility of ABE on a sensor equipped with an ARM Cortex-M3 processor having 32 kB RAM and 256 kB flash. It is possible to perform CP-ABE on the sensor used in this project. The limiting factor for the feasibility of ABE on the sensor is the RAM size. In this case, policies with up to 12 attributes can be handled on the sensor.

    The results give an idea of the feasibility of encryption with ABE on sensors. In addition to the results several ways of improving performance of ABE on the sensor are discussed.

  • 17.
    Bälter Eronell, Sofia
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Lindvall, Lisa
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Framtagning av affärsmodell inom Internet of Things: En studie om hur ett IT-konsultbolag kan verka som integratör inom IoT-ekosystemet2016Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This study examines how an Information Technology consulting firm can act as an integrator for the Internet of Things. The aim is to contribute to a greater understanding of what the IoT ecosystem looks like and what roles an integrator can take in collaboration with partners. In order to create a deeper understanding of the topic, a qualitative study was conducted with Softhouse's partners, customers, and Softhouse themselves, in order to place them within the IoT ecosystem. The study focused on examining how IoT solutions can be implemented in the forestry industry. The results show that Softhouse has great potential to offer IoT solutions through solid collaboration with partners. They should focus on becoming experts in data analysis through training and recruitment. The selection of partners for a given project depends on its size, complexity and type. Through analysis using the Business Model Canvas it is possible to see which partners are most suitable for which type of project. This was applied to two such cases with clients in the forest industry: Södra Skog and APEA Mobile Security Solutions.

  • 18.
    Cambazoglu, Volkan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Architecture and Computer Communication.
    Protocol, mobility and adversary models for the verification of security2016Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The increasing heterogeneity of communicating devices, ranging from resource constrained battery driven sensor nodes to multi-core processor computers, challenges protocol design. We examine security and privacy protocols with respect to exterior factors such as users, adversaries, and computing and communication resources; and also interior factors such as the operations, the interactions and the parameters of a protocol.

    Users and adversaries interact with security and privacy protocols, and even affect the outcome of the protocols. We propose user mobility and adversary models to examine how the location privacy of users is affected when they move relative to each other in specific patterns, while adversaries with varying strengths try to identify the users based on their historical locations. The location privacy of the users is simulated with the support of the K-Anonymity protection mechanism, the Distortion-based metric, and our models of users' mobility patterns and adversaries' knowledge about users.
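
    The K-Anonymity protection mechanism mentioned above can be illustrated with a small spatial-cloaking sketch; the grid model, the doubling cell sizes, and the parameter names are our own simplification, not the simulation setup of the thesis.

```python
# Illustrative sketch of K-anonymity by spatial cloaking (hypothetical
# grid model): the user's reported grid cell is coarsened until it
# covers at least k users, so the user is indistinguishable from at
# least k-1 others inside the reported region.

def cloak(user_pos, all_positions, k, max_level=10):
    """Return the smallest power-of-two cell around user_pos holding >= k users."""
    for level in range(max_level + 1):
        size = 2 ** level
        cx, cy = user_pos[0] // size, user_pos[1] // size
        inside = [p for p in all_positions
                  if p[0] // size == cx and p[1] // size == cy]
        if len(inside) >= k:
            return (cx * size, cy * size, size)  # cell origin and side length
    return None  # anonymity requirement cannot be met

users = [(0, 0), (1, 1), (3, 2), (9, 9)]
print(cloak((0, 0), users, k=3))  # (0, 0, 4): a 4x4 cell covering 3 users
```

    Larger k gives stronger privacy but a coarser reported location, which is the kind of trade-off the Distortion-based metric quantifies.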

    Security and privacy protocols need to operate on various computing and communication resources. Some of these protocols can be adjusted for different situations by changing parameters. A common example is to use longer secret keys in encryption for stronger security. We experiment with the trade-off between the security and the performance of the Fiat–Shamir identification protocol. We pipeline the protocol to increase its utilisation, since the communication delay outweighs the computation.
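
    One round of the Fiat–Shamir identification protocol is easy to sketch. The toy modulus and secret below stand in for a large RSA modulus and properly generated keys; the security/performance trade-off studied here comes from how many such rounds are executed, since each round halves a cheating prover's success probability.

```python
import random

# One round of the Fiat-Shamir identification protocol with toy
# numbers (illustrative only; a real run uses a large modulus n = p*q
# and many rounds).

n = 3233                # public modulus (toy value)
s = 123                 # prover's secret
v = (s * s) % n         # prover's public key, v = s^2 mod n

def fiat_shamir_round():
    r = random.randrange(1, n)
    x = (r * r) % n             # prover's commitment
    e = random.randrange(2)     # verifier's challenge bit
    y = (r * pow(s, e, n)) % n  # prover's response
    # Verifier checks y^2 == x * v^e (mod n).
    return (y * y) % n == (x * pow(v, e, n)) % n

for _ in range(5):
    assert fiat_shamir_round()  # an honest prover always passes
```

    Because each round is a short compute step followed by two message exchanges, several rounds can be kept in flight at once, which is the pipelining idea referred to above.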

    A mathematical specification based on a formal method leads to a strong proof of security. We use three formal languages with their tool supports in order to model and verify the Secure Hierarchical In-Network Aggregation (SHIA) protocol for Wireless Sensor Networks (WSNs). The three formal languages specialise on cryptographic operations, distributed systems and mobile processes. Finding an appropriate level of abstraction to represent the essential features of the protocol in three formal languages was central.

    List of papers
    1. The impact of trace and adversary models on location privacy provided by K-anonymity
    2012 (English). In: Proc. 1st Workshop on Measurement, Privacy, and Mobility, New York: ACM Press, 2012, article id 6. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    New York: ACM Press, 2012
    National Category
    Computer Sciences
    Research subject
    Computer Science with specialization in Computer Communication
    Identifiers
    urn:nbn:se:uu:diva-171581 (URN), 10.1145/2181196.2181202 (DOI), 978-1-4503-1163-2 (ISBN)
    Conference
    MPM 2012
    Projects
    ProFuN, WISENET
    Available from: 2012-04-10 Created: 2012-03-22 Last updated: 2018-01-12. Bibliographically approved
    2. Towards adaptive zero-knowledge protocols: A case study with Fiat–Shamir identification protocol
    2013 (English). In: Proc. 9th Swedish National Computer Networking Workshop, 2013, p. 67-70. Conference paper, Published paper (Refereed)
    Abstract [en]

    Interactive zero-knowledge protocols are used as identification protocols. The protocols are executed in rounds, with security increasing with every round. This allows a trade-off between security and performance, so the protocol can be adapted to the requirements of the scenario. We experimentally investigate the Fiat–Shamir identification protocol on machines and networks with different performance characteristics. We find that the delay of the protocol depends strongly on network latency and upload bandwidth. Computation time becomes more visible when the protocol transmits small amounts of data over a low-latency network. We also find that the sizes of the variables affect the delay of the protocol less than the number of rounds does; both are interior factors of the protocol.

    National Category
    Computer Sciences
    Research subject
    Computer Science with specialization in Computer Communication
    Identifiers
    urn:nbn:se:uu:diva-201070 (URN)
    Conference
    SNCNW 2013
    Projects
    WISENET, ProFuN
    Available from: 2013-06-05 Created: 2013-06-05 Last updated: 2018-01-11. Bibliographically approved
    3. Modelling and analysing a WSN secure aggregation protocol: A comparison of languages and tool support
    2015 (English). Report (Other academic)
    Series
    Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2015-033
    National Category
    Computer Sciences Communication Systems
    Research subject
    Computer Science with specialization in Computer Communication
    Identifiers
    urn:nbn:se:uu:diva-268453 (URN)
    Projects
    ProFuN
    Funder
    Swedish Foundation for Strategic Research, RIT08-0065
    Available from: 2015-12-03 Created: 2015-12-04 Last updated: 2018-01-10. Bibliographically approved
  • 19.
    Carlström, Jakob
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Reinforcement learning for admission control and routing, 2000. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    When a user requests a connection to another user or a computer in a communications network, a routing algorithm selects a path for transferring the resulting data stream. If all suitable paths are busy, the user request cannot be served and is blocked. A routing algorithm that minimizes this blocking probability results in satisfied users and maximizes the network operator's revenue. In some cases, it may even be advantageous to block a request from one user to make it possible to serve other users better. This thesis presents improved and partially new algorithms, based on reinforcement learning, which optimize the way a network is shared.

    A main contribution of the thesis is the development of algorithms that adapt to arrivals of user requests that are correlated over time. These methods are shown to increase network utilization in cases where the request arrival processes are statistically self-similar. Another main contribution is gain-scheduled routing, which reduces the computational cost associated with making routing decisions. The thesis also demonstrates how to integrate the concept of max-min fairness into reinforcement learning routing.
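    The flavour of reinforcement-learning admission control can be sketched with a toy tabular Q-learning loop (the state space, reward model, and parameters below are hypothetical simplifications, not the thesis algorithms):

    ```python
    import random

    def train_admission(capacity=3, episodes=2000, eps=0.1, alpha=0.1, gamma=0.9):
        """Tabular Q-learning for link admission control.  State = number
        of busy channels; action = accept (1) or reject (0) an arriving
        connection request."""
        Q = {(s, a): 0.0 for s in range(capacity + 1) for a in (0, 1)}
        for _ in range(episodes):
            s = random.randrange(capacity + 1)          # state at an arrival
            if random.random() < eps:                   # epsilon-greedy action
                a = random.choice((0, 1))
            else:
                a = max((0, 1), key=lambda act: Q[(s, act)])
            if a == 1 and s < capacity:
                r, s2 = 1.0, s + 1                      # accepted and served
            elif a == 1:
                r, s2 = -1.0, s                         # accepted but blocked
            else:
                r, s2 = 0.0, max(0, s - 1)              # rejected; a call ends
            target = r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])   # one-step Q update
        return Q
    ```

    Learned Q-values of this kind are what make it possible to reject one request now in order to serve more valuable requests later.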

  • 20.
    Cassel, Sofia
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Learning Component Behavior from Tests: Theory and Algorithms for Automata with Data, 2015. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Formal models are often used to describe the behavior of a computer program or component. Behavioral models have many different usages, e.g., in model-based techniques for software development and verification, such as model checking and model-based testing.

    The application of model-based techniques is hampered by the current lack of adequate models for most actual systems, largely due to the significant manual effort typically needed to construct them. To remedy this, automata learning techniques (whereby models can be inferred by performing tests on a component) have been developed for finite automata that capture control flow. However, many usages require models also to capture data flow, i.e., how behavior is affected by data values in method invocations and commands. Unfortunately, techniques are less developed for models that combine control flow with data.

    In this thesis, we extend automata learning to infer automata models that capture both control flow and data flow. We base our technique on a corresponding extension of classical automata theory to capture data.

    We define a formalism for register automata, a model that extends finite automata by adding registers that can store data values and be used in guards and assignments on transitions. Our formalism is parameterized on a theory, i.e., a set of relations on a data domain. We present a Nerode congruence for the languages that our register automata can recognize, and provide a Myhill-Nerode theorem for constructing canonical register automata, thereby extending classical automata theory to register automata.
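    The core idea of a register automaton can be sketched in a few lines (an illustrative toy over the equality theory, not the thesis formalism): finite states plus one register, with an equality guard on a transition. This automaton accepts exactly the two-symbol words whose two data values are equal.

    ```python
    def make_register_automaton():
        """Toy register automaton: one register, equality guards."""
        def step(state, reg, value):
            if state == 'q0':                    # first symbol: store value
                return 'q1', value
            if state == 'q1' and value == reg:   # guard: input equals register
                return 'accept', reg
            return 'reject', reg                 # guard failed or extra input

        def accepts(word):
            state, reg = 'q0', None
            for v in word:
                state, reg = step(state, reg, v)
            return state == 'accept'

        return accepts
    ```

    No finite automaton over an infinite data domain can recognise this language, which is why registers and guards are needed to model data flow.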

    We also present a learning algorithm for register automata: the new SL* algorithm, which extends the well-known L* algorithm for finite automata. The SL* algorithm is based on our new Nerode congruence, and uses a novel technique to infer symbolic data constraints on parameters. We evaluated our algorithm in a black-box scenario, inferring, e.g., the connection establishment phase of TCP and a priority queue, in addition to a number of smaller models. The SL* algorithm is implemented in a tool, which allows for use in more realistic settings, e.g., where models have both input and output, and for directly inferring Java classes.

    List of papers
    1. Inferring Canonical Register Automata
    2012 (English). In: Verification, Model Checking, and Abstract Interpretation: 13th International Conference / [ed] Viktor Kuncak and Andrey Rybalchenko, Springer, 2012, p. 251-266. Conference paper, Published paper (Refereed)
    Abstract [en]

    In this paper, we present an extension of active automata learning to register automata, an automaton model which is capable of expressing the influence of data on control flow. Register automata operate on an infinite data domain, whose values can be assigned to registers and compared for equality. Our active learning algorithm is unique in that it directly infers the effect of data values on control flow as part of the learning process. This effect is expressed by means of registers and guarded transitions in the resulting register automata models. The application of our algorithm to a small example indicates the impact of learning register automata models: Not only are the inferred models much more expressive than finite state machines, but the prototype implementation also drastically outperforms the classic L* algorithm, even when exploiting optimal data abstraction and symmetry reduction.

    Place, publisher, year, edition, pages
    Springer, 2012
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 7148
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-189969 (URN), 10.1007/978-3-642-27940-9 (DOI), 978-3-642-27939-3 (ISBN)
    Conference
    VMCAI 2012, Philadelphia, PA, USA, January 22-24, 2012.
    Projects
    Connect, UPMARC
    Available from: 2013-01-05 Created: 2013-01-05 Last updated: 2018-01-11
    2. RALib: A LearnLib extension for inferring EFSMs
    2015 (English). Conference paper, Published paper (Refereed)
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-265365 (URN)
    Conference
    DIFTS
    Available from: 2015-10-28 Created: 2015-10-28 Last updated: 2018-01-10
    3. Active Learning for Extended Finite State Machines
    2015 (English). Report (Other academic)
    Series
    Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2015-032
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-265368 (URN)
    Available from: 2015-10-28 Created: 2015-10-28 Last updated: 2018-01-10
    4. A succinct canonical register automaton model
    2015 (English). In: Journal of Logical and Algebraic Methods in Programming, ISSN 2352-2208, Vol. 84, no 1, p. 54-66. Article in journal (Refereed), Published
    Keywords
    register automata, data languages, canonical model, Myhill–Nerode, automata theory
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-237501 (URN), 10.1016/j.jlamp.2014.07.004 (DOI), 000347601600005 ()
    Conference
    23rd Nordic Workshop on Programming Theory (NWPT 2011)
    Projects
    Connect
    Available from: 2014-08-04 Created: 2014-12-03 Last updated: 2018-01-11. Bibliographically approved
  • 21.
    Ceballos, Germán
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Architecture and Computer Communication.
    Modeling the interactions between tasks and the memory system, 2017. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Making computer systems more energy efficient while obtaining the maximum possible performance is key for future developments in engineering, medicine, entertainment, and beyond. However, this has become a difficult task due to the increasing complexity of hardware and software and their interactions. For example, developers have to deal with deep, multi-level cache hierarchies on modern CPUs and keep thousands of GPU cores busy, which makes the programming process more difficult.

    To simplify this task, new abstractions and programming models are becoming popular. Their goal is to make applications more scalable and efficient, while still providing the flexibility and portability of older, widely adopted models. One example is task-based programming, where simple independent tasks (functions) are delegated to a runtime system that orchestrates their execution. This approach has been successful because the runtime can automatically distribute work across hardware cores and has the potential to optimize data movement and placement (e.g., by being aware of the cache hierarchy).
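    Python's ThreadPoolExecutor gives a rough feel for the task-based model (the runtimes studied in the thesis additionally track dependences between tasks and schedule for data locality):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def task(x):
        """An independent unit of work handed to the runtime system."""
        return x * x

    # The executor stands in for the runtime: the programmer submits
    # tasks, and the runtime decides where and when each task executes.
    with ThreadPoolExecutor(max_workers=4) as runtime:
        futures = [runtime.submit(task, i) for i in range(8)]
        results = [f.result() for f in futures]
    ```

    Because the runtime controls placement and ordering, which tasks share a cache, and in what order they reuse data, becomes a scheduling decision rather than a property of the source code.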

    To build better runtime systems, it is crucial to understand the performance bottlenecks of current and future multicore systems. In this thesis, we provide fast, accurate and mathematically sound models and techniques to understand the execution of task-based applications with respect to three key aspects: memory behavior (data locality), scheduling, and performance. With these methods, we lay the groundwork for improving runtime systems, providing insight into the interplay between the schedule's behavior, data reuse through the cache hierarchy, and the resulting performance.

    List of papers
    1. Shared Resource Sensitivity in Task-Based Runtime Systems
    2013 (English). In: Proc. 6th Swedish Workshop on Multi-Core Computing, Halmstad University Press, 2013. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Halmstad University Press, 2013
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:uu:diva-212780 (URN)
    Conference
    MCC13, November 25–26, Halmstad, Sweden
    Projects
    Resource Sharing Modeling, UPMARC
    Funder
    Swedish Research Council
    Available from: 2013-12-13 Created: 2013-12-13 Last updated: 2018-01-26. Bibliographically approved
    2. Formalizing data locality in task parallel applications
    2016 (English). In: Algorithms and Architectures for Parallel Processing, Springer, 2016, p. 43-61. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Springer, 2016
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 10049
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-310341 (URN), 10.1007/978-3-319-49956-7_4 (DOI), 000389797000004 (), 978-3-319-49955-0 (ISBN)
    Conference
    ICA3PP 2016, December 14–16, Granada, Spain
    Projects
    UPMARC, Resource Sharing Modeling
    Funder
    Swedish Foundation for Strategic Research, FFL12-0051
    Available from: 2016-11-19 Created: 2016-12-14 Last updated: 2018-01-26. Bibliographically approved
    3. TaskInsight: Understanding task schedules effects on memory and performance
    2017 (English). In: Proc. 8th International Workshop on Programming Models and Applications for Multicores and Manycores, New York: ACM Press, 2017, p. 11-20. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    New York: ACM Press, 2017
    National Category
    Computer Engineering
    Identifiers
    urn:nbn:se:uu:diva-315033 (URN), 10.1145/3026937.3026943 (DOI), 978-1-4503-4883-6 (ISBN)
    Conference
    PMAM 2017, February 4–8, Austin, TX
    Projects
    UPMARC, Resource Sharing Modeling
    Funder
    Swedish Research Council; Swedish Foundation for Strategic Research, FFL12-0051; EU, Horizon 2020, 687698
    Available from: 2017-02-04 Created: 2017-02-08 Last updated: 2018-01-26. Bibliographically approved
    4. Analyzing performance variation of task schedulers with TaskInsight
    2018 (English). In: Parallel Computing, ISSN 0167-8191, E-ISSN 1872-7336, Vol. 75, p. 11-27. Article in journal (Refereed), Published
    National Category
    Computer Engineering
    Identifiers
    urn:nbn:se:uu:diva-340202 (URN), 10.1016/j.parco.2018.02.003 (DOI)
    Projects
    UPMARC, Resource Sharing Modeling
    Available from: 2018-02-22 Created: 2018-01-26 Last updated: 2018-03-03. Bibliographically approved
  • 22.
    D'Angelo, Laura
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Evaluation of code generation in agile software development of embedded systems, 2018. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Generating code from software models is considered to be a new generation leap within software development methods. The objective of this M.Sc. project is to evaluate how different approaches to modelling and code generation affect embedded systems software development and propose recommendations on how to improve software development. Two product areas at Saab Surveillance EW Systems in Järfälla, Stockholm, are used as study objects.

    A research overview is made to highlight themes regarding modelling, code generation and software development in general. Based on these, interviews are held with system engineers and software developers at each product area, where they use different modelling and code generation approaches. The two development processes are described thoroughly. Challenges and advantages related to each area’s approach are investigated.

    Software development within product area A is affected by the product complexity and the larger scale of the development, including projects running over a longer time with more teams involved. Recommendations include enabling code generation by aligning it with other investments on process improvement and limiting the approach to generating some system components. Software developers within product area B can use full code generation, enabled by the limited product complexity. The product area is affected by software standards and external requirements on the process. Recommendations include extending the modelling approach to make it easier to trace functionality from system to software level. Conclusions are that both product areas can apply modelling and code generation to more software development activities to improve their respective development processes.

  • 23.
    Daniels, Mats
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Developing and Assessing Professional Competencies: a Pipe Dream?: Experiences from an Open-Ended Group Project Learning Environment, 2011. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Professional competencies are explicitly identified in the primary learning outcomes for science and engineering degrees at many tertiary institutions.  Fulfillment of the requirements to equip our students with these skills, while formally acknowledged as important by all stakeholders, can be hard to demonstrate in practice.  Most degree awarding institutions would have difficulties if asked to document where in degree programs such competencies are developed.

    The work in this thesis addresses the issue of professional competencies from several angles.  The Open-Ended Group Project (OEGP) concept is introduced and proposed as an approach to constructing learning environments in which students’ development of professional competencies can be stimulated and assessed.  Scholarly, research-based development of the IT in Society course unit (ITiS) is described and analyzed in order to present ideas for tailoring OEGP-based course units towards meeting learning objectives related to professional competence.  Work in this thesis includes an examination of both the meanings attributed to the term professional competencies, and methods which can be used to assess the competencies once they are agreed on.

    The empirical work on developing ITiS is based on a framework for educational research, which has been both refined and extended as an integral part of my research.  The action research methodology is presented and concrete examples of implementations of different pedagogical interventions, based on the methodology, are given.  The framework provides support for relating a theoretical foundation to studies, or development, of learning environments.  The particular theoretical foundation for the examples in this thesis includes, apart from the action research methodology, constructivism, conceptual change, threshold concepts, communities of practice, ill-structured problem solving, the reflective practicum, and problem based learning.

    The key finding in this thesis is that development and assessment of professional competencies is not a pipe dream.  Assessment can be accomplished, and the OEGP concept provides a flexible base for creating an appropriate learning environment for this purpose.

    List of papers
    1. Reflections on International Projects in the Undergraduate CS Education
    1999 (English). In: Computer Science Education, Vol. 9, no 3, p. 256-267. Article in journal (Refereed), Published
    Abstract [en]

    Educational methods race to keep pace with the opportunities afforded by technology, and in computer science we need methods that tie together the diverse aspects of the discipline and set them in a realistic human context. Projects have the potential to address various aims and perspectives, and international projects add new dimensions to student teamwork, requiring students to handle collaboration that is remote, cross-cultural, and linguistically challenging. This paper examines some of the educational issues associated with international projects, in the discussion of two examples: PASTICS and Runestone.

    National Category
    Computer Engineering
    Identifiers
    urn:nbn:se:uu:diva-40502 (URN)
    Available from: 2008-11-26 Created: 2008-11-26 Last updated: 2018-01-11
    2. Open Ended Group Projects (OEGP): A way of including diversity in the IT curriculum
    2006 (English). In: Diversity in information technology education: Issues and controversies, Information Science Publishing, London, 2006, p. 166-195. Chapter in book (Refereed)
    Abstract [en]

    Modern societies are now beginning to accept that their citizens are diverse but, arguably, have not yet faced up to the challenges of diversity. Schools and universities thus have a role to play in equipping students for the diverse society in which they will live and work. IT students in particular need to appreciate the diversity of society as they specify, design, build and evaluate systems for a wide range of people. This chapter examines the concept of the Open Ended Group Project (OEGP) and uses examples to demonstrate that OEGP forms an effective technique for encouraging students to work together in diverse teams. The appropriateness of OEGP as a means of addressing diversity in the curriculum is examined and it is concluded that OEGP offers a suitable means of enabling students to develop strategies for accommodating diversity in both their future working life and the wider society.

    Place, publisher, year, edition, pages
    Information Science Publishing, London, 2006
    Keywords
    Computing education
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-77351 (URN), 1-59140-742-7 (ISBN)
    Available from: 2006-03-14 Created: 2006-03-14 Last updated: 2018-01-13
    3. Experiences from using Constructive Controversy in an Open Ended Group Project
    2010 (English). In: Proc. 40th ASEE/IEEE Frontiers in Education Conference, Piscataway, NJ: IEEE, 2010, p. S3D-1-5. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Piscataway, NJ: IEEE, 2010
    National Category
    Computer Sciences Didactics
    Identifiers
    urn:nbn:se:uu:diva-139736 (URN), 10.1109/FIE.2010.5673418 (DOI), 978-1-4244-6261-2 (ISBN)
    Available from: 2010-12-23 Created: 2010-12-29 Last updated: 2018-01-12. Bibliographically approved
    4. Engineering Education Research in Practice: Evolving use of open ended group projects as a pedagogical strategy for developing skills in global collaboration
    2010 (English). In: International journal of engineering education, ISSN 0949-149X, Vol. 26, no 4, p. 795-806. Article in journal (Refereed), Published
    Abstract [en]

    Globalization presents engineering educators with new challenges as they face the need for graduates who can function comfortably in an increasingly distributed team context which crosses country and cultural boundaries. Scaffolding learners to acquire professional attributes which transcend the solely technical places stress on traditional curriculum models. This paper analyses an Open Ended Group Project Framework (OEGP) situated in an action research program applied within the IT in Society course at Uppsala University. The approach results in conscious evolution of the course as an integral element of its design. It enables flexible planned educational change informed by a combination of learning theories and stakeholder input. In this paper we discuss the role of the research program in addressing the educational challenges we faced assisting students to develop global collaboration skills. The implications of combining this course with one at a partner institution in the USA and developing a global collaboration are also addressed. The paper concludes by summarizing the benefits of adopting an integrated action research and OEGP framework to support flexible course delivery in a global professional engineering context.

    Keywords
    Open Ended Group Projects, global collaboration, action research, engineering education research
    National Category
    Computer Sciences
    Research subject
    Computer Science with specialization in Computer Science Education Research
    Identifiers
    urn:nbn:se:uu:diva-112983 (URN), 000282010500007 ()
    Available from: 2010-01-25 Created: 2010-01-25 Last updated: 2018-01-12. Bibliographically approved
    5. Assessing professional skills in engineering education
    2011 (English). In: Australian Computer Science Communications, ISSN 0157-3055, Vol. 33, no 2, p. 145-154. Article in journal (Refereed), Published
    National Category
    Computer Sciences Didactics
    Identifiers
    urn:nbn:se:uu:diva-145999 (URN)
    Available from: 2011-01-20 Created: 2011-02-14 Last updated: 2018-01-12. Bibliographically approved
  • 24.
    Davari, Mahdad
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Architecture and Computer Communication.
    Advances Towards Data-Race-Free Cache Coherence Through Data Classification, 2017. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Providing a consistent view of the shared memory based on precise and well-defined semantics—memory consistency model—has been an enabling factor in the widespread acceptance and commercial success of shared-memory architectures. Moreover, cache coherence protocols have been employed by the hardware to remove from the programmers the burden of dealing with the memory inconsistency that emerges in the presence of the private caches. The principle behind all such cache coherence protocols is to guarantee that consistent values are read from the private caches at all times.

    In its most stringent form, a cache coherence protocol eagerly enforces two invariants before each data modification: i) no other core has a copy of the data in its private caches, and ii) all other cores know where to receive the consistent data should they need the data later. Nevertheless, by partly transferring the responsibility for maintaining those invariants to the programmers, commercial multicores have adopted weaker memory consistency models, namely the Total Store Order (TSO), in order to optimize the performance for more common cases.

    Moreover, memory models with more relaxed invariants have been proposed based on the observation that more and more software is written in compliance with the Data-Race-Free (DRF) semantics. The semantics of DRF software can be leveraged by the hardware to infer when data in the private caches might be inconsistent. As a result, hardware ignores the inconsistent data and retrieves the consistent data from the shared memory. DRF semantics therefore removes from the hardware the burden of eagerly enforcing the strong consistency invariants before each data modification. Instead, consistency is guaranteed only when needed. This results in manifold optimizations, such as reducing the energy consumption and improving the performance and scalability. The efficiency of detecting and discarding the inconsistent data is an important factor affecting the efficiency of such coherence protocols. For instance, discarding the consistent data does not affect the correctness, but results in performance loss and increased energy consumption.

    In this thesis we show how data classification can be leveraged as an effective tool to simplify the cache coherence based on the DRF semantics. In particular, we introduce simple but efficient hardware-based private/shared data classification techniques that can be used to efficiently detect the inconsistent data, thus enabling low-overhead and scalable cache coherence solutions based on the DRF semantics.
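    The idea of discarding possibly inconsistent shared data at synchronization points, while keeping private data, can be sketched as a toy cache model (hypothetical names and a per-address value store; real protocols operate on cache lines in hardware):

    ```python
    class DRFCache:
        """Toy private-cache model: at a synchronization point, lines
        classified as shared are self-invalidated because they may be
        stale; lines classified as private are kept."""
        def __init__(self, classify):
            self.lines = {}            # address -> cached value
            self.classify = classify   # address -> 'private' or 'shared'

        def load(self, addr, memory):
            if addr not in self.lines:        # miss: fetch consistent copy
                self.lines[addr] = memory[addr]
            return self.lines[addr]           # hit: serve from the cache

        def acquire(self):
            # Synchronization: discard only possibly inconsistent shared data.
            self.lines = {a: v for a, v in self.lines.items()
                          if self.classify(a) == 'private'}
    ```

    The accuracy of `classify` is exactly the efficiency question raised above: misclassifying private data as shared is safe but costs extra misses, traffic, and energy.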

    List of papers
    1. Hierarchical private/shared classification: The key to simple and efficient coherence for clustered cache hierarchies
    2015 (English). In: Proc. 21st International Symposium on High Performance Computer Architecture, IEEE Computer Society Digital Library, 2015, p. 186-197. Conference paper, Published paper (Refereed)
    Abstract [en]

    Hierarchical clustered cache designs are becoming an appealing alternative for multicores. Grouping cores and their caches in clusters reduces network congestion by localizing traffic among several hierarchical levels, potentially enabling much higher scalability. While such architectures can be formed recursively by replicating a base design pattern, keeping the whole hierarchy coherent requires more effort and consideration. The reason is that, in hierarchical coherence, even basic operations must be recursive. As a consequence, intermediate-level caches behave both as directories and as leaf caches. This leads to an explosion of states, protocol races, and protocol complexity. While there have been previous efforts to extend directory-based coherence to hierarchical designs, their increased complexity and verification cost are a serious impediment to adoption. We aim to address these concerns by encapsulating all hierarchical complexity in a simple function: that of determining when a data block is shared entirely within a cluster (a sub-tree of the hierarchy) and is private from the outside. This allows us to eliminate complex recursive operations that span the hierarchy and instead employ simple coherence mechanisms such as self-invalidation and write-through, now restricted to operate within the cluster where a data block is shared. We examine two inclusivity options and discuss the relation of our approach to the recently proposed Hierarchical-Race-Free (HRF) memory models. Finally, comparisons to hierarchical directory-based MOESI, VIPS-M, and TokenCMP protocols show that, despite its simplicity, our approach delivers competitive performance and decreased network traffic.

    Place, publisher, year, edition, pages
    IEEE Computer Society Digital Library, 2015
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:uu:diva-265651 (URN)10.1109/HPCA.2015.7056032 (DOI)000380564900016 ()9781479989300 (ISBN)
    Conference
    HPCA 2015, February 7–11, Burlingame, CA
    Available from: 2015-02-11 Created: 2015-11-02 Last updated: 2017-04-22Bibliographically approved
    2. The effects of granularity and adaptivity on private/shared classification for coherence
    2015 (English)In: ACM Transactions on Architecture and Code Optimization (TACO), ISSN 1544-3566, E-ISSN 1544-3973, Vol. 12, no 3, article id 26Article in journal (Refereed) Published
    Abstract [en]

    Classification of data into private and shared has proven to be a catalyst for techniques to reduce coherence cost, since private data can be taken out of coherence and resources can be concentrated on providing coherence for shared data. In this article, we examine how granularity (page-level versus cache-line level) and adaptivity (going from shared to private) affect the outcome of classification and its final impact on coherence. We create a classification technique, called Generational Classification, and a coherence protocol, called Generational Coherence, which treats data as private or shared based on cache-line generations. We compare two coherence protocols based on self-invalidation/self-downgrade with respect to data classification. Our findings are enlightening: (i) Some programs benefit from finer granularity, some benefit further from adaptivity, but some do not benefit from either. (ii) Reducing the amount of shared data has no perceptible impact on coherence misses caused by self-invalidation of shared data, hence no impact on performance. (iii) In contrast, classifying more data as private has implications for protocols that employ write-through as a means of self-downgrade, resulting in network traffic reduction (up to 30%) by reducing write-through traffic.
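    The granularity effect described above can be seen with a toy classifier: two cores touching different lines of the same page look shared at page granularity but private at line granularity. The addresses, sizes, and function below are assumptions for illustration, not the article's mechanism:

```python
# Illustrative sketch: classify data as private or shared at two
# granularities. A region is "shared" if more than one core accessed it.

PAGE = 4096   # bytes per page (assumed)
LINE = 64     # bytes per cache line (assumed)

def classify(accesses, granularity):
    """Map each region (address // granularity) to private/shared."""
    owners = {}
    for core, addr in accesses:
        owners.setdefault(addr // granularity, set()).add(core)
    return {region: ("shared" if len(cores) > 1 else "private")
            for region, cores in owners.items()}

# Two cores touch different cache lines of the same page.
accesses = [(0, 0x1000), (1, 0x1040)]
print(classify(accesses, PAGE))  # one page, two cores: classified shared
print(classify(accesses, LINE))  # two distinct lines: both private
```

    Coarser granularity over-approximates sharing, which is exactly why, as the abstract notes, finer granularity and adaptivity can reclassify data as private and cut write-through traffic.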

    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:uu:diva-265580 (URN)10.1145/2790301 (DOI)000363004100001 ()
    Available from: 2015-10-06 Created: 2015-11-02 Last updated: 2017-12-01Bibliographically approved
    3. An efficient, self-contained, on-chip directory: DIR1-SISD
    2015 (English)In: Proc. 24th International Conference on Parallel Architectures and Compilation Techniques, IEEE Computer Society, 2015, p. 317-330Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    IEEE Computer Society, 2015
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:uu:diva-265611 (URN)10.1109/PACT.2015.23 (DOI)000378942700027 ()978-1-4673-9524-3 (ISBN)
    Conference
    PACT 2015, October 18–21, San Francisco, CA
    Available from: 2015-11-02 Created: 2015-11-02 Last updated: 2017-04-22Bibliographically approved
    4. Scope-Aware Classification: Taking the hierarchical private/shared data classification to the next level
    2017 (English)Report (Other academic)
    Series
    Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2017-008
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:uu:diva-320324 (URN)
    Available from: 2017-04-27 Created: 2017-04-19 Last updated: 2017-07-03Bibliographically approved
    5. The best of both works: A hybrid data-race-free cache coherence scheme
    2017 (English)Report (Other academic)
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:uu:diva-320320 (URN)
    Available from: 2017-04-19 Created: 2017-04-19 Last updated: 2017-11-15
  • 25.
    David, Alexandre
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Hierarchical Modeling and Analysis of Real Time Systems2003Doctoral thesis, monograph (Other academic)
    Abstract [en]

    UPPAAL is a tool for model-checking real-time systems developed jointly by Uppsala University and Aalborg University. It has been applied successfully in case studies ranging from communication protocols to multimedia applications. The tool is designed to verify systems that can be modeled as networks of timed automata. But it lacks support for systems with hierarchical structures, which makes the construction of large models difficult. In this thesis we improve the efficiency of UPPAAL with new data structures and extend its modeling language and its engine to support hierarchical constructs.

    To investigate the limits of UPPAAL, we model and analyze an industrial fieldbus communication protocol. To our knowledge, this case study is the largest application UPPAAL has been confronted with, and we managed to verify the models. However, the hierarchical structure of the protocol is encoded as a network of automata without hierarchy, which artificially complicates the model. It turns out that we need to improve performance and enrich the modeling language.

    To attack the performance bottlenecks, we unify the two central structures of the UPPAAL engine, the passed and waiting lists, and improve memory management to take advantage of data sharing between states. We present experimental results that demonstrate improvements by a factor of 2 in time consumption and a factor of 5 in memory consumption.

    We enhance the modeling capabilities of UPPAAL by extending its input language with hierarchical constructs to structure the models. We have developed a verification engine that supports modeling of hierarchical systems without penalty in performance. To further benefit from the structures of models, we present an approximation technique that utilizes hierarchy in verification.

    Finally, we propose a new architecture to integrate the different verification techniques into a common framework. It is designed as a pipeline built with components that are changed to fit particular experimental configurations and to add new features. The new engine of UPPAAL is based on this architecture. We believe that the architecture is applicable to other verification tools.

  • 26.
    David, Alexandre
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Practical verification of real-time systems2001Licentiate thesis, monograph (Other scientific)
    Abstract [en]

    Formal methods are becoming mature enough to be used on nontrivial examples. They are particularly well fitted for real-time systems, whose correctness is defined in terms of correct responses at correct times. Most common real-time systems are of reasonable size and can therefore be handled by an automatic verification tool such as Uppaal. Unfortunately, the application of such techniques is not yet widespread.

    This thesis presents advances in making formal techniques a more accessible technology for system development and analysis. As the first contribution, we report on an industrial case study to show that model checkers can be used for debugging and error localization. We present a number of abstraction techniques applied in the case study to avoid the state explosion problem. As the second contribution, we have developed a hierarchical extension of timed automata to enable more structured, compact, and more complex descriptions of systems by the users. Such a hierarchical representation is better suited for abstraction and is expected to give better searching algorithms. Finally, we present a hybrid animation system serving as a plug-in module for model checkers to improve features for modelling and simulation.

  • 27.
    Deneux, Johann
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Verification of Parameterized and Timed Systems: Undecidability Results and Efficient Methods2006Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Software is finding its way into an increasing range of devices (phones, medical equipment, cars...). A challenge is to design verification methods to ensure correctness of software.

    We focus on model checking, an approach in which an abstract model of the implementation and a specification of requirements are provided. The task is to answer automatically whether the system conforms with its specification. We concentrate on (i) timed systems, and (ii) parameterized systems.

    Timed systems can be modeled and verified using the classical model of timed automata. Correctness is translated to language inclusion between two timed automata representing the implementation and the specification. We consider variants of timed automata, and show that the problem is at best highly complex, at worst undecidable.

    A parameterized system contains a variable number of components. The problem is to verify correctness regardless of the number of components. Regular model checking is a prominent method which uses finite-state automata. We present a semi-symbolic minimization algorithm combining the partition refinement algorithm by Paige and Tarjan with decision diagrams.
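    As plain, explicit-state intuition for the partition refinement that the abstract's algorithm builds on, the following sketch computes the coarsest stable partition of a tiny DFA. The semi-symbolic, decision-diagram-based version from the thesis is not modeled here, and all names are illustrative:

```python
# Moore-style partition refinement: start from {accepting, non-accepting}
# and repeatedly split blocks whose states disagree on which block their
# successors land in. (Paige-Tarjan achieves the same result faster by
# processing "splitters"; the idea of stabilizing a partition is the same.)

def refine(states, alphabet, delta, accepting):
    """Return the coarsest partition of `states` stable under `delta`."""
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [b for b in partition if b]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Group states by the blocks their successors belong to.
            sig = {}
            for s in block:
                key = tuple(next(i for i, b in enumerate(partition)
                                 if delta[s, a] in b) for a in alphabet)
                sig.setdefault(key, set()).add(s)
            new_partition.extend(sig.values())
            changed |= len(sig) > 1
        partition = new_partition
    return partition

# Tiny DFA over {a}: states 0 and 1 both step to 2; 2 accepts and loops.
delta = {(0, 'a'): 2, (1, 'a'): 2, (2, 'a'): 2}
print(refine([0, 1, 2], ['a'], delta, [2]))  # 0 and 1 end up equivalent
```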

    Finally, we consider systems which are both timed and parameterized: Timed Petri Nets (TPNs), and Timed Networks (TNs). We present a method for checking safety properties of TPNs based on forward reachability analysis with acceleration. We show that verifying safety properties of TNs is undecidable when each process has at least two clocks, and explore decidable variations of this problem.

    List of papers
    1. Decidability and Complexity Results for Timed Automata via Channel Machines
    2005 In: LNCS, Vol. 3580Article in journal (Refereed) Published
    Identifiers
    urn:nbn:se:uu:diva-94524 (URN)
    Available from: 2006-05-12 Created: 2006-05-12Bibliographically approved
    2. Minimization of Non-deterministic Automata with Large Alphabets
    2006 (English)In: Implementation and Application of Automata, Springer Berlin/Heidelberg, 2006, p. 31-42Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Springer Berlin/Heidelberg, 2006
    Series
    Lecture Notes in Computer Science ; 3845
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-94525 (URN)10.1007/11605157_3 (DOI)3-540-31023-1 (ISBN)
    Conference
    CIAA 2005, June 27-29, Sophia Antipolis, France
    Available from: 2006-05-12 Created: 2006-05-12 Last updated: 2018-01-13Bibliographically approved
    3. Networks of Identical Multi-Clock Timed Processes
    Article in journal (Refereed) Submitted
    Identifiers
    urn:nbn:se:uu:diva-94526 (URN)
    Available from: 2006-05-12 Created: 2006-05-12Bibliographically approved
    4. Forward Reachability Analysis of Timed Petri Nets
    2004 (English)In: FORMATS-FTRTFT'04, 2004, 2004Conference paper, Published paper (Refereed)
    National Category
    Natural Sciences
    Identifiers
    urn:nbn:se:uu:diva-94527 (URN)
    Conference
    Joint Conference Formal Modelling and Analysis of Timed Systems (FORMATS) and Formal Techniques in Real-Time and Fault Tolerant System (FTRTFT), September 22-24, 2004, Grenoble, France
    Available from: 2006-05-12 Created: 2006-05-12 Last updated: 2015-09-30Bibliographically approved
  • 28.
    d'Orso, Julien
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    New Directions in Symbolic Model Checking2003Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In today's computer engineering, requirements for generally high reliability have pushed the notion of testing to its limits. Many disciplines are moving, or have already moved, to more formal methods to ensure correctness. This is done by comparing the behavior of the system as it is implemented against a set of requirements. The ultimate goal is to create methods and tools that are able to perform this kind of verification automatically: this is called Model Checking.

    Although the notion of model checking has existed for two decades, adoption by the industry has been hampered by its poor applicability to complex systems. During the 1990s, researchers introduced an approach to cope with large (even infinite) state spaces: Symbolic Model Checking. The key notion is to represent large (possibly infinite) sets of states by a small formula (as opposed to enumerating all members). In this thesis, we investigate applying symbolic methods to different types of systems:

    Parameterized systems. We work within the framework of Regular Model Checking. In regular model checking, we represent a global state as a word over a finite alphabet. A transition relation is represented by a regular length-preserving transducer. An important operation is the so-called transitive closure, which characterizes composing a transition relation with itself an arbitrary number of times. Since completeness cannot be achieved, we propose methods of computing closures that work as often as possible.
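    In regular model checking the closure is computed on transducers, and termination is not guaranteed. As a finite stand-in for the underlying idea, this sketch saturates an explicit relation to its transitive closure by a fixpoint; the states encode a token-passing ring, a standard toy parameterized system (the encoding and names are illustrative assumptions):

```python
# Saturate an explicit relation R (a set of pairs) to its transitive closure:
# keep composing R with itself until no new pairs appear. The regular model
# checking setting replaces pairs of states with a transducer over words.

def transitive_closure(pairs):
    closure = set(pairs)
    while True:
        extra = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if extra <= closure:
            return closure
        closure |= extra

# Token passing in a 3-process ring: one step moves the token one place right.
R = {('100', '010'), ('010', '001')}
print(sorted(transitive_closure(R)))  # also derives the two-step move
```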

    Games on infinite structures. Infinite-state systems for which the transition relation is monotonic with respect to a well quasi-ordering on states can be analyzed. We lift the framework of well quasi-ordered domains toward games. We show that monotonic games are in general undecidable. We identify a subclass of monotonic games: downward-closed games. We propose an algorithm to analyze such games with a winning condition expressed as a safety property.

    Probabilistic systems. We present a framework for the quantitative analysis of probabilistic systems with an infinite state space: given an initial state sinit, a set F of final states, and a rational Θ > 0, compute a rational ρ such that the probability of reaching F from sinit is between ρ and ρ + Θ. We present a generic algorithm and sufficient conditions for termination.
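    The shape of such an approximation scheme can be sketched for a finite Markov chain: unfold paths breadth-first, accumulating the probability of paths that have reached F (a lower bound ρ), and stop once the probability mass still "in flight" drops below Θ, since that mass bounds the error. This is only an illustration under assumed names; the thesis targets infinite state spaces, where the termination conditions are the hard part:

```python
# Hedged sketch: bounded-error reachability probability in a finite Markov
# chain. `trans[s]` maps each state to its successor distribution.

def approx_reach(trans, s_init, F, theta):
    reached = 0.0                 # probability settled in F so far (lower bound)
    frontier = {s_init: 1.0}      # probability mass still on unfinished paths
    while sum(frontier.values()) >= theta:
        nxt = {}
        for s, p in frontier.items():
            if s in F:
                reached += p      # settled: contributes to the lower bound
                continue
            for t, q in trans.get(s, {}).items():
                nxt[t] = nxt.get(t, 0.0) + p * q
        frontier = nxt
    return reached                # true probability lies in [rho, rho + theta)

# s0 reaches F with prob 1/2 each step, else stays: true probability is 1.
trans = {'s0': {'F': 0.5, 's0': 0.5}}
print(approx_reach(trans, 's0', {'F'}, 0.01))  # within 0.01 of 1
```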

  • 29.
    Edström, Rickard
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Internet caching - sizing and placement for dynamic content2015Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Traffic volumes on the Internet continue to increase, with the same links carrying the same data multiple times. In-network caching can alleviate much of the redundant traffic by storing popular items close to the users. This master's thesis project built on existing research and simulators to simulate a multi-level hierarchical network cache system and its resulting performance under varying parameters, such as the placement and sizing of the individual cache nodes. One goal of the thesis work was to improve and integrate the simulation frameworks used as a starting point. Another goal was to run simulations with the improved framework and shed light on how a high Quality of Experience (QoE) can be achieved within this kind of cache system while varying the input parameters. An improved and integrated simulation framework was produced, including improved visualization capabilities. Using this framework, the behavior of the cache system was studied, in particular how the system behaves with static and dynamic cache sizing approaches. One conclusion drawn is that the dynamic sizing approach deployed can be a good way to achieve a high QoE. Finally, future research opportunities are identified.

  • 30.
    Ekberg, Pontus
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Models and Complexity Results in Real-Time Scheduling Theory2015Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    When designing real-time systems, we want to prove that they will satisfy given timing constraints at run time. The main objective of real-time scheduling theory is to analyze properties of mathematical models that capture the temporal behaviors of such systems. These models typically consist of a collection of computational tasks, each of which generates an infinite sequence of task activations. In this thesis we study different classes of models and their corresponding analysis problems.

    First, we consider models of mixed-criticality systems. The timing constraints of these systems state that all tasks must meet their deadlines for the run-time scenarios fulfilling certain assumptions, for example on execution times. For the other scenarios, only the most important tasks must meet their deadlines. We study both tasks with sporadic activation patterns and tasks with complicated activation patterns described by arbitrary directed graphs. We present sufficient schedulability tests, i.e., methods used to prove that a given collection of tasks will meet their timing constraints under a particular scheduling algorithm.

    Second, we consider models where tasks can lock mutually exclusive resources and have activation patterns described by directed cycle graphs. We present an optimal scheduling algorithm and an exact schedulability test.

    Third, we address a pair of longstanding open problems in real-time scheduling theory. These concern the computational complexity of deciding whether a collection of sporadic tasks are schedulable on a uniprocessor. We show that this decision problem is strongly coNP-complete in the general case. In the case where the asymptotic resource utilization of the tasks is bounded by a constant smaller than 1, we show that it is weakly coNP-complete.

    List of papers
    1. Bounding and shaping the demand of generalized mixed-criticality sporadic task systems
    2014 (English)In: Real-time systems, ISSN 0922-6443, E-ISSN 1573-1383, Vol. 50, no 1, p. 48-86Article in journal (Refereed) Published
    Abstract [en]

    We generalize the commonly used mixed-criticality sporadic task model to let all task parameters (execution time, deadline, and period) change between criticality modes. In addition, new tasks may be added in higher criticality modes, and the modes may be arranged using any directed acyclic graph, where the nodes represent the different criticality modes and the edges the possible mode switches. We formulate demand bound functions for mixed-criticality sporadic tasks and use these to determine EDF-schedulability. Tasks have different demand bound functions for each criticality mode. We show how to shift execution demand between different criticality modes by tuning the relative deadlines. This allows us to shape the demand characteristics of each task. We propose efficient algorithms for tuning all relative deadlines of a task set in order to shape the total demand to the available supply of the computing platform. Experiments indicate that this approach is successful in practice. This new approach has the added benefit of supporting hierarchical scheduling frameworks.
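    For background, the demand bound function of a classical (single-criticality) sporadic task with worst-case execution time C, relative deadline D, and period T is the standard dbf(t) = max(0, floor((t - D) / T) + 1) * C, and preemptive-uniprocessor EDF schedulability requires the summed demand never to exceed t. The sketch below checks that condition over a test horizon; the article's per-mode, mixed-criticality generalization is beyond this illustration:

```python
# Standard sporadic-task demand bound function and a brute-force EDF test.
# A task is a (C, D, T) triple. `horizon` is an assumed test bound; exact
# analyses derive a sufficient horizon rather than picking one.

def dbf(task, t):
    C, D, T = task
    # Jobs with both release and deadline inside [0, t]:
    return max(0, (t - D) // T + 1) * C

def edf_schedulable(tasks, horizon):
    return all(sum(dbf(tsk, t) for tsk in tasks) <= t
               for t in range(1, horizon + 1))

tasks = [(1, 3, 4), (2, 6, 6)]       # utilization 1/4 + 1/3, fits on one CPU
print(edf_schedulable(tasks, 48))    # True

print(edf_schedulable([(3, 3, 3), (2, 2, 4)], 48))  # overloaded: False
```

    Tuning a relative deadline D, as the article does, shifts the steps of dbf left or right and thereby reshapes when a task's demand materializes.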

    National Category
    Computer Sciences
    Research subject
    Computer Science with specialization in Real Time Systems
    Identifiers
    urn:nbn:se:uu:diva-212779 (URN)10.1007/s11241-013-9187-z (DOI)000328351200003 ()
    Projects
    UPMARC
    Available from: 2013-06-15 Created: 2013-12-13 Last updated: 2018-01-11Bibliographically approved
    2. Schedulability analysis of a graph-based task model for mixed-criticality systems
    2016 (English)In: Real-time systems, ISSN 0922-6443, E-ISSN 1573-1383, Vol. 52, no 1, p. 1-37Article in journal (Refereed) Published
    National Category
    Computer Sciences
    Research subject
    Computer Science with specialization in Real Time Systems
    Identifiers
    urn:nbn:se:uu:diva-265781 (URN)10.1007/s11241-015-9225-0 (DOI)000370819700001 ()
    Available from: 2015-05-01 Created: 2015-11-03 Last updated: 2018-01-10Bibliographically approved
    3. An optimal resource sharing protocol for generalized multiframe tasks
    2015 (English)In: The Journal of logical and algebraic methods in programming, ISSN 2352-2208, E-ISSN 2352-2216, Vol. 84, no 1, p. 92-105Article in journal (Refereed) Published
    National Category
    Computer Engineering
    Research subject
    Computer Science with specialization in Real Time Systems
    Identifiers
    urn:nbn:se:uu:diva-235474 (URN)10.1016/j.jlamp.2014.10.001 (DOI)000347601600007 ()
    Projects
    UPMARC
    Available from: 2014-10-16 Created: 2014-11-04 Last updated: 2018-01-11Bibliographically approved
    4. Uniprocessor feasibility of sporadic tasks with constrained deadlines is strongly coNP-complete
    2015 (English)In: Proc. 27th Euromicro Conference on Real-Time Systems, Piscataway, NJ: IEEE, 2015, p. 281-286Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Piscataway, NJ: IEEE, 2015
    National Category
    Computer Sciences
    Research subject
    Computer Science with specialization in Real Time Systems
    Identifiers
    urn:nbn:se:uu:diva-265783 (URN)10.1109/ECRTS.2015.32 (DOI)000375052900025 ()978-1-4673-7570-2 (ISBN)
    Conference
    ECRTS 2015, July 7–10, Lund, Sweden
    Projects
    UPMARC
    Available from: 2015-08-06 Created: 2015-11-03 Last updated: 2018-01-10Bibliographically approved
    5. Uniprocessor feasibility of sporadic tasks remains coNP-complete under bounded utilization
    2015 (English)In: Proc. 36th Real-Time Systems Symposium, IEEE Computer Society, 2015, p. 87-95Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    IEEE Computer Society, 2015
    National Category
    Computer Sciences
    Research subject
    Computer Science with specialization in Real Time Systems
    Identifiers
    urn:nbn:se:uu:diva-265784 (URN)10.1109/RTSS.2015.16 (DOI)000380424600009 ()978-1-4673-9507-6 (ISBN)
    Conference
    RTSS 2015, December 1–4, San Antonio, TX
    Projects
    UPMARC
    Available from: 2016-01-18 Created: 2015-11-03 Last updated: 2018-01-10Bibliographically approved
  • 31.
    Eklöv, David
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Efficient methods for application performance analysis2011Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    To reduce latency and increase bandwidth to memory, modern microprocessors are designed with deep memory hierarchies including several levels of caches. For such microprocessors, the service time for fetching data from off-chip memory is about two orders of magnitude longer than fetching data from the level-one cache. Consequently, the performance of applications is largely determined by how well they utilize the caches in the memory hierarchy, captured by their miss ratio curves. However, efficiently obtaining an application's miss ratio curve and interpreting its performance implications is hard. This task becomes even more challenging when analyzing application performance on multicore processors where several applications/threads share caches and memory bandwidths. To accomplish this, we need powerful techniques that capture applications' cache utilization and provide intuitive performance metrics.

    In this thesis we present three techniques for analyzing application performance, StatStack, StatCC and Cache Pirating. Our main focus is on providing memory hierarchy related performance metrics such as miss ratio, fetch ratio and bandwidth demand, but also execution rate. These techniques are based on profiling information, requiring both runtime data collection and post processing. For such techniques to be broadly applicable the data collection has to have minimal impact on the profiled application, allow profiling of unmodified binaries, and not depend on custom hardware and/or operating system extensions. Furthermore, the information provided has to be accurate and easy to interpret by programmers, the runtime environment and compilers.

    StatStack estimates an application's miss ratio curve, StatCC estimates the miss ratio of co-running application sharing the last-level cache and Cache Pirating measures any desired performance metric available through hardware performance counters as a function of cache size. We have experimentally shown that our methods are both efficient and accurate. The runtime information required by StatStack and StatCC can be collected with an average runtime overhead of 40%. The Cache Pirating method measures the desired performance metrics with an average runtime overhead of 5%.
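    The miss ratio curve at the heart of these techniques can be illustrated exactly on a toy trace: for a fully associative LRU cache, an access misses iff its stack distance (the number of distinct lines touched since the previous access to the same line) is at least the cache size. StatStack's contribution is estimating these distances statistically from sparse samples; this sketch computes them exactly instead, with assumed names:

```python
# Exact stack distances and the resulting LRU miss ratio curve for a toy
# trace of cache-line identifiers.

def stack_distances(trace):
    dists = []
    last_seen = {}
    for i, line in enumerate(trace):
        if line in last_seen:
            window = trace[last_seen[line] + 1 : i]
            dists.append(len(set(window)))   # distinct lines since last use
        else:
            dists.append(float('inf'))       # cold miss at every cache size
        last_seen[line] = i
    return dists

def miss_ratio_curve(trace, sizes):
    dists = stack_distances(trace)
    # An access misses in a cache of c lines iff its distance is >= c.
    return {c: sum(d >= c for d in dists) / len(dists) for c in sizes}

trace = ['A', 'B', 'C', 'A', 'B', 'C']
print(miss_ratio_curve(trace, [1, 2, 3, 4]))  # drops once all 3 lines fit
```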

  • 32.
    Eklöv, David
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Profiling Methods for Memory Centric Software Performance Analysis2012Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    To reduce latency and increase bandwidth to memory, modern microprocessors are often designed with deep memory hierarchies including several levels of caches. For such microprocessors, both the latency and the bandwidth to off-chip memory are typically about two orders of magnitude worse than the latency and bandwidth to the fastest on-chip cache. Consequently, the performance of many applications is largely determined by how well they utilize the caches and bandwidths in the memory hierarchy. For such applications, there are two principal approaches to improve performance: optimize the memory hierarchy and optimize the software. In both cases, it is important to both qualitatively and quantitatively understand how the software utilizes and interacts with the resources (e.g., cache and bandwidths) in the memory hierarchy.

    This thesis presents several novel profiling methods for memory-centric software performance analysis. The goal of these profiling methods is to provide general, high-level, quantitative information describing how the profiled applications utilize the resources in the memory hierarchy, and thereby help software and hardware developers identify opportunities for memory related hardware and software optimizations. For such techniques to be broadly applicable the data collection should have minimal impact on the profiled application, while not being dependent on custom hardware and/or operating system extensions. Furthermore, the resulting profiling information should be accurate and easy to interpret.

    While several use cases are presented, the main focus of this thesis is the design and evaluation of the core profiling methods. These core profiling methods measure and/or estimate how high-level performance metrics, such as miss-and fetch ratio; off-chip bandwidth demand; and execution rate are affected by the amount of resources the profiled applications receive. This thesis shows that such high-level profiling information can be accurately obtained with very little impact on the profiled applications and without requiring costly simulations or custom hardware support.

    List of papers
    1. StatStack: Efficient modeling of LRU caches
    2010 (English)In: Proc. International Symposium on Performance Analysis of Systems and Software: ISPASS 2010, Piscataway, NJ: IEEE , 2010, p. 55-65Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Piscataway, NJ: IEEE, 2010
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-136247 (URN)10.1109/ISPASS.2010.5452069 (DOI)978-1-4244-6023-6 (ISBN)
    Projects
    Coder-mpUPMARC
    Available from: 2010-04-19 Created: 2010-12-10 Last updated: 2018-01-12Bibliographically approved
    2. Fast modeling of shared caches in multicore systems
    2011 (English)In: Proc. 6th International Conference on High Performance and Embedded Architectures and Compilers, New York: ACM Press , 2011, p. 147-157Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    New York: ACM Press, 2011
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-146757 (URN)10.1145/1944862.1944885 (DOI)978-1-4503-0241-8 (ISBN)
    Projects
    Coder-mpUPMARC
    Available from: 2011-02-20 Created: 2011-02-20 Last updated: 2018-01-12Bibliographically approved
    3. Cache Pirating: Measuring the Curse of the Shared Cache
    2011 (English)In: Proc. 40th International Conference on Parallel Processing, IEEE Computer Society, 2011, p. 165-175Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    IEEE Computer Society, 2011
    National Category
    Computer Engineering
    Identifiers
    urn:nbn:se:uu:diva-181254 (URN); 10.1109/ICPP.2011.15 (DOI); 978-1-4577-1336-1 (ISBN)
    Conference
    ICPP 2011
    Projects
    UPMARC, CoDeR-MP
    Available from: 2011-10-17 Created: 2012-09-20 Last updated: 2018-01-12. Bibliographically approved
    4. Quantitative Characterization of Memory Contention
    2012 (English)Report (Other academic)
    Abstract [en]

    On multicore processors, co-executing applications compete for shared resources, such as cache capacity and memory bandwidth. This leads to suboptimal resource allocation and can cause substantial performance loss, which makes it important to effectively manage these shared resources. This, however, requires insights into how the applications are impacted by such resource sharing.

    While there are several methods to analyze the performance impact of cache contention, less attention has been paid to general, quantitative methods for analyzing the impact of contention for memory bandwidth. To this end, we introduce the Bandwidth Bandit, a general, quantitative profiling method for analyzing the performance impact of contention for memory bandwidth on multicore machines.

    The profiling data captured by the Bandwidth Bandit is presented in a bandwidth graph. This graph accurately captures the measured application's performance as a function of its available memory bandwidth, and enables us to determine how much the application suffers when its available bandwidth is reduced. To demonstrate the value of this data, we present a case study in which we use the bandwidth graph to analyze the performance impact of memory contention when co-running multiple instances of a single-threaded application.
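    A bandwidth graph of the kind described above maps available memory bandwidth to application performance. As a minimal sketch of how such data might be consumed (the sample points, units, and function names below are invented for illustration, not taken from the report):

```python
# Hypothetical bandwidth-graph samples: (available bandwidth in GB/s,
# execution rate in instructions per cycle). Values invented for illustration.
bandwidth_graph = [(1.0, 0.4), (2.0, 0.7), (4.0, 0.9), (8.0, 1.0)]

def execution_rate(bw):
    """Linearly interpolate the application's execution rate at a given
    available bandwidth, clamping outside the measured range."""
    pts = sorted(bandwidth_graph)
    if bw <= pts[0][0]:
        return pts[0][1]
    if bw >= pts[-1][0]:
        return pts[-1][1]
    for (b0, r0), (b1, r1) in zip(pts, pts[1:]):
        if b0 <= bw <= b1:
            return r0 + (r1 - r0) * (bw - b0) / (b1 - b0)

def slowdown(full_bw, reduced_bw):
    """Relative performance loss when available bandwidth drops,
    e.g. because a co-running application consumes part of it."""
    return 1.0 - execution_rate(reduced_bw) / execution_rate(full_bw)
```

    Reading the slowdown off such a graph is exactly the kind of question the case study answers: how much does the application suffer when co-runners take a share of the memory bandwidth.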

    Place, publisher, year, edition, pages
    Uppsala: Uppsala universitet, 2012. p. 10
    Series
    Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2012-029
    National Category
    Computer Systems
    Research subject
    Computer Systems Sciences
    Identifiers
    urn:nbn:se:uu:diva-182445 (URN)
    Available from: 2013-03-28 Created: 2012-10-10 Last updated: 2013-03-28. Bibliographically approved
    5. A Profiling Method for Analyzing Scalability Bottlenecks on Multicores
    2012 (English)Report (Other academic)
    Publisher
    p. 12
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:uu:diva-182453 (URN)
    Available from: 2012-10-10 Created: 2012-10-10 Last updated: 2018-06-28. Bibliographically approved
  • 33.
    El Shobaki, Mohammed
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    On-chip monitoring for non-intrusive hardware/software observability2004Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The increased complexity of today's state-of-the-art computer systems makes them hard to analyse, test, and debug. Moreover, the advances in hardware technology give system designers enormous possibilities to explore hardware as a means to implement performance-demanding functionality. We see examples of this trend in novel microprocessors, and Systems-on-Chip, that comprise reconfigurable logic allowing for hardware/software co-design. To succeed in developing computer systems based on these premises, it is paramount to have efficient design tools and methods.

    An important aspect in the development process is observability, i.e., the ability to observe the system's behaviour at various levels of detail. These observations are required for many applications: when looking for design errors, during debugging, during performance assessments and fine-tuning of algorithms, for extraction of design data, and a lot more. In real-time systems, and computers that allow for concurrent process execution, the observability must be obtained without compromising the system's functional and timing behaviour.

    In this thesis we propose a monitoring system that can be used for non-intrusive run-time observations of real-time and concurrent computer systems. The monitoring system, designated Multipurpose/Multiprocessor Application Monitor (MAMon), is based on a hardware probe unit (IPU) which is integrated with the observed system's hardware. The IPU collects process-level events from a hardware Real-Time Kernel (RTK), without perturbing the system, and transfers the events to an external computer for analysis, debugging, and visualisation. Moreover, the MAMon concept also features hybrid monitoring for collection of more fine-grained information, such as program instructions and data flows.

    We describe MAMon's architecture, the implementation of two hardware prototypes, and validation of the prototypes in different case studies. The main conclusion is that process-level events can be traced non-intrusively by integrating the IPU with a hardware RTK. Furthermore, the IPU's small footprint makes it attractive for SoC designs, as it provides increased system observability at a low hardware cost.

  • 34.
    Engblom, Jakob
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Processor Pipelines and Static Worst-Case Execution Time Analysis2002Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Worst-Case Execution Time (WCET) estimates for programs are necessary when building real-time systems. They are used to ensure timely responses from interrupts, to guarantee the throughput of cyclic tasks, as input to scheduling and schedule analysis algorithms, and in many other circumstances. Traditionally, such estimates have been obtained either by measurements or labor-intensive manual analysis, which is both time consuming and error-prone. Static worst-case execution time analysis is a family of techniques that promise to quickly provide safe execution time estimates for real-time programs, simultaneously increasing system quality and decreasing the development cost. This thesis presents several contributions to the state-of-the-art in WCET analysis.

    We present an overall architecture for WCET analysis tools that provides a framework for implementing modules. Within the stable interfaces provided, modules can be independently replaced, making it easy to customize a tool for a particular target and perform performance-precision trade-offs.

    We have developed concrete techniques for analyzing and representing the timing behavior of programs running on pipelined processors. The representation and analysis is more powerful than previous approaches in that pipeline timing effects across more than pairs of instructions can be handled, and in that no assumptions are made about the program structure. The analysis algorithm relies on a trace-driven processor simulator instead of a special-purpose processor model. This allows us to use existing simulators to adapt the analysis to a new target platform, reducing the retargeting effort.

    We have defined a formal mathematical model of processor pipelines, which we use to investigate the properties of pipelines and WCET analysis. We prove several interesting properties of processors with in-order issue, such as the freedom from timing anomalies and the fundamental safety of WCET analysis for certain classes of pipelines. We have also constructed a number of examples that demonstrate that tight and safe WCET analysis for pipelined processors might not be as easy as once believed.

    Considering the link between the analysis methods and the real world, we discuss how to build accurate software models of processor hardware, and the conditions under which accuracy is achievable.

  • 35.
    Eriksson, Emil
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    System collaboration and information sharing using DDS technology2016Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Just as the Internet of Things is set to change how devices are being used and connected in society in general, the Industrial Internet of Things will change the industries. In an industrial production line there are often many heterogeneous devices, and the requirements on the real-time properties of the communication between them are often strict. Creating a communication solution for the different devices that also meets such requirements is difficult. The traditional way for industrial devices to communicate is directly with each other or via a central point, but this communication solution is inflexible and difficult to scale up.

    One possible way to make communication and information sharing between devices easier is to use a dedicated middleware to handle the communication. One middleware standard is the Data Distribution Service (DDS) defined by the Object Management Group. In this thesis a DDS middleware from a specific vendor (vendor name is removed due to company confidentiality) is implemented and evaluated.

    The middleware is evaluated based on (1) an implementation in a prototype which shows how the middleware performs in a real-life industrial context, and (2) a simulation that showcases the potential of the technology.

    The DDS middleware was shown to function with a specific set of existing industrial hardware and software. The real-time properties of the communication system were studied and found to be around 3.5 times slower, when using the prototype setup, than those of the replaced communication solution. However, the round trip latency was still only 2 ms on average and 4.1 ms maximum when using the preferred settings.

    The simulation showed that there is potential for the DDS technology to be used in more advanced scenarios and that it should be investigated further.

  • 36.
    Eriksson, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Detailed simulation of heterogeneous wireless sensor networks2009Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Wireless sensor networks consist of many small nodes. Each node has a small microprocessor, a radio chip, some sensors, and is usually battery powered which limits network lifetime. Applications of wireless sensor networks range from environmental monitoring and health-care to industrial automation and military surveillance.

    Since the nodes are battery powered and communication consumes more energy than computation, much of the research focuses on power-efficient communication. One of the problems, however, is to measure the power consumption and communication quality of the different solutions.

    Simulation of sensor networks can greatly increase development speed and also be used for evaluating power consumption as well as communication quality. Simulation experiments typically give easier access to fine grained results than corresponding real-world experiments. The problem with simulators is that it is hard to show that a simulation experiment corresponds well with a similar real-world experiment.

    This thesis studies how detailed simulation of wireless sensor networks can be validated for accuracy and also shows several important uses of detailed simulation such as power consumption profiling and interoperability testing. Both of them represent important topics in today's wireless sensor network research and development.

    The results of the thesis are the simulation platform COOJA/MSPSim and the demonstration that MAC-protocol experiments performed in our simulator COOJA/MSPSim correspond well with experiments performed in our testbed. We also show that, using COOJA/MSPSim, any software running in the simulation can be power profiled.

  • 37.
    Ermedahl, Andreas
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    A Modular Tool Architecture for Worst-Case Execution Time Analysis2003Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Estimations of the Worst-Case Execution Time (WCET) are required in providing guarantees for timing of programs used in computer controlled products and other real-time computer systems. To derive program WCET estimates, both the properties of the software and the hardware must be considered. The traditional method to obtain WCET estimates is to test the system and measure the execution time. This is labour-intensive and error-prone work, which unfortunately cannot guarantee that the worst case is actually found. Static WCET analyses, on the other hand, are capable of generating safe WCET estimates without actually running the program. Such analyses use models of program flow and hardware timing to generate WCET estimates.

    This thesis includes several contributions to the state-of-the-art in static WCET analysis:

    (1) A tool architecture for static WCET analysis, which divides the WCET analysis into several steps, each with well-defined interfaces. This allows independent replacement of the modules implementing the different steps, which makes it easy to customize a WCET tool for particular target hardware and analysis needs.

    (2) A representation for the possible executions of a program. Compared to previous approaches, our representation extends the type of program flow information possible to express and handle in WCET analysis.

    (3) A calculation method which explicitly extracts a longest program execution path. The method is more efficient than previously presented path-based methods, with a computational complexity close to linear in the size of the program.

    (4) A calculation method using integer linear programming or constraint programming techniques for calculating the WCET estimate. The method extends the power of such calculation methods to handle new types of flow and timing information.

    (5) A calculation method that first uses flow information to divide the program into smaller parts, then calculates individual WCET estimates for these parts, and finally combines these into an overall program WCET. This novel approach avoids potential complexity problems, while still providing high precision WCET estimates.

    We have additionally implemented a prototype WCET analysis tool based on the proposed architecture. This tool is used for extensive evaluation of the precision and performance of our proposed methods. The results indicate that it is possible to perform WCET analysis in a modular fashion, and that this analysis produces high quality WCET estimates.
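    Contribution (3) above describes explicitly extracting a longest execution path with complexity close to linear in the program size. As a hedged illustration only (this is the textbook longest-path dynamic program over a loop-free control-flow graph in topological order, not the thesis's actual calculation method; block names, times, and edges are invented):

```python
from collections import defaultdict

def wcet_longest_path(block_time, edges, entry, exit):
    """Longest (worst-case) path through a loop-free control-flow graph.
    Nodes are basic blocks with execution times; edges are possible flows.
    The topological-order dynamic program runs in O(|V| + |E|) time."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    # Kahn's algorithm for a topological order, starting from the entry block.
    order, stack = [], [entry]
    while stack:
        u = stack.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    # Longest accumulated execution time from entry to each block.
    dist = {entry: block_time[entry]}
    for u in order:
        for v in succ[u]:
            cand = dist[u] + block_time[v]
            if cand > dist.get(v, -1):
                dist[v] = cand
    return dist[exit]
```

    Real WCET calculation must of course also handle loops (via flow information such as loop bounds), which is where the flow-fact representation of contribution (2) comes in.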

  • 38.
    Feeney, Laura Marie
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Gunningberg, Per
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Avoiding an IoT "Tragedy of the Commons"2018Conference paper (Refereed)
    Abstract [en]

    The large number and wide diversity of IoT networks operating in unlicensed spectrum will create a complex and challenging interference environment. To avoid a "tragedy of the commons", networks may need to more explicitly coordinate their use of the shared channel.

  • 39.
    Fersman, Elena
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    A Generic Approach to Schedulability Analysis of Real-Time Systems2003Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis presents a framework for design, analysis, and implementation of embedded systems. We adopt a model of timed automata extended with asynchronous processes i.e. tasks triggered by events. A task is an executable program characterized by its worst-case execution time and deadline, and possibly other parameters such as priorities etc. for scheduling. The main idea is to associate each location of an automaton with a task (or a set of tasks). A transition leading to a location denotes an event triggering the tasks and the clock constraint on the transition specifies the possible arrival times of the event. This yields a model for real-time systems expressive enough to describe concurrency and synchronization, and tasks with (or without) combinations of timing, precedence and resource constraints, which may be periodic, sporadic, preemptive and (or) non-preemptive. We believe that the model may serve as a bridge between scheduling theory and automata-theoretic approaches to system modelling and analysis.

    Our main result is that the schedulability checking problem for this model is decidable. To our knowledge, this is the first general decidability result on dense-time models for real-time scheduling without assuming that preemptions occur only at integer time points. The proof is based on a decidable class of updatable automata: timed automata with subtraction in which clocks may be updated by subtractions within a bounded zone. As the second contribution, we show that for the fixed-priority scheduling strategy, the schedulability checking problem can be solved by reachability analysis on standard timed automata using only two extra clocks in addition to the clocks used in the original model to describe task arrival times. The analysis can be done in a similar manner to response time analysis in classic Rate-Monotonic Scheduling. We believe that this is the optimal solution to the problem. The third contribution is an extension of the above results to deal with precedence and resource constraints. We present an operational semantics for the model, and show that the related schedulability analysis problem can be solved efficiently using the same techniques. Finally, to demonstrate the applicability of the framework, we have modelled, analysed, and synthesised the control software for a production cell. The presented results have been implemented in the Times tool for automated schedulability analysis and code synthesis.
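    The abstract notes that the fixed-priority analysis can be done in a similar manner to response time analysis in classic Rate-Monotonic Scheduling. That classical analysis (from the scheduling literature, not this thesis's automata-based method) iterates the recurrence R = C_i + sum over higher-priority tasks j of ceil(R/T_j)*C_j to a fixed point; a minimal sketch, with an invented task set:

```python
import math

def response_time(tasks, i):
    """Classic fixed-priority response-time analysis.
    tasks: list of (C, T) pairs, worst-case execution time and period,
    sorted by decreasing priority (deadline assumed equal to period).
    Iterates R = C_i + sum_{j in hp(i)} ceil(R / T_j) * C_j to a fixed
    point; returns None if R exceeds the period (unschedulable)."""
    C_i, T_i = tasks[i]
    R = C_i
    while True:
        interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
        R_next = C_i + interference
        if R_next == R:
            return R            # fixed point reached: worst-case response time
        if R_next > T_i:
            return None         # response time exceeds the deadline
        R = R_next
```

    The thesis generalizes well beyond this: the automata-based formulation handles event-triggered (non-periodic) arrivals, precedence, and resource constraints that the classical recurrence cannot express.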

  • 40.
    Finne, Niclas
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Towards adaptive sensor networks2011Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Wireless sensor networks consist of many small embedded devices that are equipped with sensors and a wireless communication unit. These devices, or sensor nodes, are typically low cost, resource constrained and battery-powered. Sensor network applications include environmental monitoring, industrial condition monitoring, building surveillance, and intelligent homes.

    Sensor network applications today are often developed either using standard software components, which enables simpler development but leads to far from optimal performance, or using software customized for the specific application, which complicates development, software updates, and software reuse.

    We suggest that logic is separated from configuration and other information, for instance, network statistics and estimated power consumption. Software components publish their configuration and other information using a generalized programming abstraction. Configuration policies are separate modules that react on changes and reconfigure the sensor node as needed. These configuration policies are responsible for coordinating the configuration between the components and optimize the sensor network towards the application objectives.

    One of our contributions is that we demonstrate the need for self-monitoring and self-configuration based on experiences from two deployed sensor networks. Our main contribution is that we devise a configuration architecture that solves the problem of cross-layer optimization for sensor network applications without tight coupling between components, thus enabling standard and easily replaceable components to be used. The configuration architecture makes it possible to reach the same level of performance as specialized cross-layer optimizations but without adding application-specific knowledge to the components.

  • 41.
    Forslin, Sara
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Automatiska GUI-test: En teoretisk och praktisk studie för hållbarare test2010Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    To automate software testing is an investment, which makes it important to have good knowledge about automation before an implementation is made. In this master thesis a study about automated testing is carried out. The study also includes a closer look at the testing tool TestComplete. TestComplete is one of many tools on the market developed for automated GUI testing, and the goal is to get a picture of how well the tool can adjust to CC Systems' own applications. Studies show that it is hard to write lasting test scripts. The automation must be implemented early in the development process, and many protective measures have to be applied to the tests in order to protect them. The report also discusses a number of design requirements that are put on the tested application. These requirements are relatively easy to implement if they are known from the start of the development process. Two of the most important requirements are that the controls have to have specified names so the tests can use and identify them, and that the values that are important in the verification of the test have to be accessible and readable. CCSimTech is one of CC Systems' own developed products and it is used in different hardware simulations. It is an important tool in manual testing at CC Systems today. Therefore it is important that TestComplete can also use it. To solve this, a library of functions adjusted for TestComplete has been created. The functions are developed so that they can be called directly from the test scripts in TestComplete in a user-friendly way. If the tests are able to call and use CCSimTech, that will mean a way to further control the applications and make the tests even more powerful.

  • 42.
    Furunäs Åkesson, Johan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Interprocess communication utilising special purpose hardware2001Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Real-Time Systems are computer systems with constraints on the timing of actions. To ease the development and maintenance of application software, Real-time Systems often make use of a Real-Time Operating System (RTOS). Its main task is scheduling of application processes (tasks). Other functions can be interprocess communication, interrupt handling, memory management etc.

    Sometimes it is hard (or even impossible) to meet the time constraints specified for a real-time system, resulting in an incorrectly functioning application. A possible remedy is to redesign the system by upgrading the processor and/or remove functionality, etc. An alternative solution could be the use of a special purpose hardware accelerated RTOS. The aim of such an accelerator is to speedup RTOS functions that impose big overhead i.e. to reduce the kernel overhead by offloading the application processor. Accordingly, the processor gets more time for executing application software, and hopefully the time constraints can be met. The main drawback is the cost of extra hardware.

    This thesis presents results from implementing RTOS functions in hardware, especially interprocess communication (IPC) functions. The types of systems considered are uniprocessor and shared memory multiprocessor real-time systems.

    IPC is used in systems with co-operating processes. The operating systems on the market support a large variation of IPC mechanisms. We will here present and evaluate three different IPC implementations. The first is an extended message queue mechanism that is used in commercial robot control applications. The second is the signal mechanism in OSE, a commercial RTOS predominantly used in telecommunication control applications, and the third is the semaphore and message queue mechanisms supported by the leading commercial RTOS VxWorks. All the implementations are based on a pre-emptive priority-based hardware real-time kernel accelerator.

    We show that it is not optimal, practical or desirable to implement every kernel function into hardware, regarding systems in the scope of this thesis. However, an accelerator allows new functionality to be implemented. We illustrate this by implementing a message queue mechanism that supports priority inheritance for message arrival in hardware, which is too expensive to implement in software. Also, we show that substantial speedups are possible, and that a crucial mechanism in achieving speedup is the accelerator-processor interconnect. We further note that application speedups are possible, even in cases with an IPC-mechanism slow-down. The main reasons for this is that the accelerator can off-load the processor by handling the RTOS timing mechanism (clock-ticks), reducing the RTOS code to be executed on the processor, and handling interrupts.

  • 43.
    Furuskog, Martin
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Wemyss, Stuart
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Cross-platform development of smartphone applications: An evaluation of React Native2016Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    During the last ten years the market for smartphones has grown drastically. Because of the current state of the market, with different operating systems, many smartphone applications need to be developed for several platforms. With this thesis, the goal was ultimately to contribute to the understanding of cross-platform development as a way of developing smartphone applications. React Native has been evaluated as a framework with which development is done for both Android and iOS using the same code. To study React Native as a framework, a smartphone application for Android and iOS was developed at an Uppsala-based IT company with expertise in web services, smartphone applications, and online gaming. Furthermore, performance tests and user tests were conducted in which React Native was compared to native applications and applications developed using Xamarin (a similar cross-platform development framework owned by Microsoft). It was evident that using the same code for both Android and iOS was time saving. However, the performance test results showed that applications developed with React Native did not perform as well as the native and Xamarin versions, leading to the conclusion that the choice of framework when developing cross-platform applications needs to take into consideration performance, development time, and programming language preference.

  • 44.
    Gold, Richard
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    An Indirection Architecture for the Internet2005Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    We present an indirection architecture for the Internet called SelNet. SelNet provides a uniform indirection mechanism for controlling the route that packets take through the network and which functions are invoked to process these packets. In the current Internet, at least for the majority of users, there is only one way that a packet can go and that is to the default route. Whilst this is sufficient for many applications, numerous applications have arisen which require alternative routes or processing to be present not only at the application layer of the protocol stack, but at the network layer itself. Solutions to such scenarios attempt to place an indirection point between the communicating end-systems either with a middlebox (such as a proxy) or by altering one or more of the Internet's naming systems. However, these approaches lead to an application-specific network, which is against the Internet's design goals. We argue for a uniform approach to indirection instead of building multiple, partially overlapping structures as is the current trend. SelNet differs from existing indirection approaches in that it is function-orientated, rather than node-orientated, and in that it provides an explicit, controllable resolution mechanism for resolving host names and services. The motivation behind our approach is to create efficient indirection structures for supporting new applications which have indirection requirements. We present a detailed design and specification of SelNet. We then go on to describe implementation work with the LUNAR ad-hoc routing protocol and the Janus middleware for accessing sensor network systems. The purpose of this implementation work is to demonstrate the feasibility of SelNet and its ability to reach its goals.

    List of papers
    1. Network Pointers
    2003 In: ACM SIGCOMM Computer Communication Review, ISSN 0146-4833, Vol. 33, no 1, p. 23-28. Article in journal (Refereed), Published
    Identifiers
    urn:nbn:se:uu:diva-93833 (URN)
    Available from: 2005-12-02 Created: 2005-12-02. Bibliographically approved
    2. A virtualized link layer with support for indirection
    2004 (English). In: ACM SIGCOMM Workshop on Future Directions in Network Architecture (FDNA'04) / [ed] Keshav, Fall, Portland: ACM New York, NY, USA, 2004, p. 28-34. Conference paper, Published paper (Refereed).
    Abstract [en]

    The Internet today hosts several extensions for indirection, such as Mobile IP, NAT, proxies, route selection and various network overlays. At the same time, the user-controlled indirection mechanisms foreseen in the Internet architecture (e.g., loose source routing) cannot be used to implement these extensions. This is a consequence of the Internet's indirection semantics being not rich enough in some places and too rich in others. To achieve a more uniform handling of indirection, we propose SelNet, a network architecture based on a virtualized link layer with explicit indirection support. Indirection in this context refers to user-controlled steering of packet flows through the network. We discuss the architectural implications of such a scheme and report on implementation progress.
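    The user-controlled steering mentioned in the abstract can be sketched as a chain of per-hop bindings: each binding names both the next hop and the function to apply at that hop. This is a minimal illustration with invented names; the paper's actual protocol and wire format differ.

```python
# Minimal sketch of user-controlled packet steering via per-hop
# bindings (names invented for illustration; not the paper's
# actual protocol).

# Each binding maps (node, selector) -> (next_node, next_selector, fn).
# A next_node of None means final delivery at this hop.
bindings = {
    ("A", 1): ("B", 7, lambda p: p),        # relay unchanged
    ("B", 7): ("C", 3, lambda p: p + "!"),  # process, then forward
    ("C", 3): (None, None, lambda p: p),    # final delivery
}

def steer(node, selector, payload):
    """Follow the chain of bindings installed by the user/resolver."""
    while node is not None:
        nxt, nxt_sel, fn = bindings[(node, selector)]
        payload = fn(payload)
        node, selector = nxt, nxt_sel
    return payload

print(steer("A", 1, "msg"))  # msg!
```

    Installing or changing a binding at any hop redirects or reprocesses the flow without touching the end-systems, which is the uniform mechanism the abstract argues the per-application extensions lack.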

    Place, publisher, year, edition, pages
    Portland: ACM New York, NY, USA, 2004
    Keywords
    networking, routing, indirection, architectures, implementation
    National Category
    Engineering and Technology
    Research subject
    Computer Science with specialization in Computer Communication
    Identifiers
    urn:nbn:se:uu:diva-93834 (URN)
    Conference
    ACM SIGCOMM
    Projects
    CONNECTED
    Available from: 2005-12-02. Created: 2005-12-02. Last updated: 2011-03-03. Bibliographically approved.
    3. Lightweight Underlay Network Adhoc Routing Protocol and Implementation
    2004. In: Next Generation Teletraffic and Wired/Wireless Advanced Networking (NEW2AN'04). Article in journal (Refereed). Published.
    Identifiers
    urn:nbn:se:uu:diva-93835 (URN)
    Available from: 2005-12-02. Created: 2005-12-02. Bibliographically approved.
    4. Janus: an architecture for flexible access to sensor networks
    2005. In: Proceedings of the 1st ACM workshop on Dynamic interconnection of networks, ISBN 1-59593-144-9, p. 48-52. Article in journal (Refereed). Published.
    Identifiers
    urn:nbn:se:uu:diva-93836 (URN)
    Available from: 2005-12-02. Created: 2005-12-02. Bibliographically approved.
    5. Janus: An Active Middleware for Accessing Sensor Networks
    Article in journal (Refereed). In press.
    Identifiers
    urn:nbn:se:uu:diva-93837 (URN)
    Available from: 2005-12-02. Created: 2005-12-02. Last updated: 2010-01-14. Bibliographically approved.
  • 45.
    Grinchtein, Olga
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.