uu.se – Uppsala University Publications
Results 1–50 of 135
  • 1.
    Alves, Ricardo
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorarkitektur och datorkommunikation.
    Leveraging Existing Microarchitectural Structures to Improve First-Level Caching Efficiency (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Low-latency data access is essential for performance. To achieve this, processors use fast first-level caches combined with out-of-order execution, to decrease and hide memory access latency, respectively. While these approaches are effective for performance, they cost significant energy, leading to the development of many techniques that require designers to trade off performance and efficiency.

    Way-prediction and filter caches are two of the most common strategies for improving first-level cache energy efficiency while still minimizing latency. Both involve compromises: way-prediction trades off some latency for better energy efficiency, while filter caches trade off some energy efficiency for lower latency. However, these strategies are not mutually exclusive. By borrowing elements from both, and taking into account SRAM memory layout limitations, we propose a novel MRU-L0 cache that mitigates many of their shortcomings while preserving their benefits. Moreover, while first-level caches are tightly integrated into the CPU pipeline, existing work on these techniques largely ignores the impact they have on instruction scheduling. We show that the variable hit latency introduced by way-mispredictions causes instruction replays of load-dependent instruction chains, which hurts performance and efficiency. We study this effect and propose a variable-latency cache-hit instruction scheduler that identifies potential mis-schedulings, reduces instruction replays, reduces the negative performance impact, and further improves cache energy efficiency.

    Modern pipelines also employ sophisticated execution strategies to hide memory latency and improve performance. While their primary use is for performance and correctness, they require intermediate storage that can be used as a cache as well. In this work we demonstrate how the store-buffer, paired with the memory dependency predictor, can be used to efficiently cache dirty data, and how the physical register file, paired with a value predictor, can be used to efficiently cache clean data. These strategies not only improve both performance and energy efficiency, but do so with no additional storage and minimal additional complexity, since they recycle existing CPU structures to detect reuse, memory ordering violations, and misspeculations.

    List of papers
    1. Addressing energy challenges in filter caches
    2017 (English). In: Proc. 29th International Symposium on Computer Architecture and High Performance Computing, IEEE Computer Society, 2017, pp. 49-56. Conference paper, Published paper (Refereed)
    Abstract [en]

    Filter caches and way-predictors are common approaches to improve the efficiency and/or performance of first-level caches. Filter caches use a small L0 to provide more efficient and faster access to a small subset of the data, and work well for programs with high locality. Way-predictors improve efficiency by accessing only the predicted way, which alleviates the need to read all ways in parallel without increasing latency, but hurts performance due to mispredictions. In this work we examine how SRAM layout constraints (h-trees and data mapping inside the cache) affect way-predictors and filter caches. We show that accessing the smaller L0 array can be significantly more energy efficient than attempting to read fewer ways from a larger L1 cache, and that the main source of energy inefficiency in filter caches comes from L0 and L1 misses. We propose a filter cache optimization that shares the tag array between the L0 and the L1, which incurs the overhead of reading the larger tag array on every access, but in return allows us to directly access the correct L1 way on each L0 miss. This optimization does not add any extra latency and, counter-intuitively, improves the filter cache's overall energy efficiency beyond that of the way-predictor. By combining the low-power benefits of a physically smaller L0 with the reduction in miss energy from reading L1 tags upfront in parallel with L0 data, we show that the optimized filter cache reduces dynamic cache energy by 26% compared to a traditional filter cache while providing the same performance advantage. Compared to a way-predictor, the optimized cache improves performance by 6% and energy by 2%.
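
    To make the shared-tag optimization concrete, the following hypothetical simulation sketch (invented class name, sizes and energy weights; not the paper's model or evaluation) reads the L1 tag array on every access in parallel with a tiny direct-mapped L0, so an L0 miss can probe exactly the correct L1 data way instead of all ways.

```python
# Hypothetical sketch of a filter cache (L0) that shares the L1 tag array, so
# an L0 miss can index the correct L1 way directly. Sizes and energy weights
# are illustrative placeholders, not measurements from the paper.

L0_LINES = 8                     # tiny direct-mapped L0
L1_SETS, L1_WAYS = 64, 4
LINE = 64                        # bytes per cache line

E_L0_DATA, E_L1_TAGS, E_L1_ONE_WAY, E_MISS = 1.0, 0.5, 4.0, 16.0

class SharedTagFilterCache:
    def __init__(self):
        self.l0 = [None] * L0_LINES                        # line address per L0 slot
        self.l1 = [[None] * L1_WAYS for _ in range(L1_SETS)]
        self.energy = 0.0

    def access(self, addr):
        line = addr // LINE
        # The L1 tag array is read up front on every access (the shared tags),
        # in parallel with the small L0 data array.
        self.energy += E_L1_TAGS + E_L0_DATA
        ways = self.l1[line % L1_SETS]
        hit_way = next((w for w, t in enumerate(ways) if t == line), None)
        if self.l0[line % L0_LINES] == line:
            return "L0 hit"
        if hit_way is not None:
            # L0 miss, but the tag lookup already identified the correct L1 way:
            # read exactly one data way instead of probing all ways in parallel.
            self.energy += E_L1_ONE_WAY
            self.l0[line % L0_LINES] = line                # fill L0
            return "L1 hit (single way)"
        self.energy += E_MISS                              # fetch from next level (flat toy cost)
        ways[0] = line                                     # trivial replacement
        self.l0[line % L0_LINES] = line
        return "miss"

cache = SharedTagFilterCache()
for a in (0, 64, 0, 4096, 64, 0):
    print(hex(a), cache.access(a))
print("illustrative energy:", cache.energy)
```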

    Place, publisher, year, edition, pages
    IEEE Computer Society, 2017
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-334221 (URN), 10.1109/SBAC-PAD.2017.14 (DOI), 000426895600007 (), 978-1-5090-1233-6 (ISBN)
    Conference
    29th International Symposium on Computer Architecture and High Performance Computing SBAC-PAD, 2017, October 17–20, Campinas, Brazil.
    Available from: 2017-11-09 Created: 2017-11-21 Last updated: 2019-05-22. Bibliographically approved
    2. Dynamically Disabling Way-prediction to Reduce Instruction Replay
    2018 (English). In: 2018 IEEE 36th International Conference on Computer Design (ICCD), IEEE, 2018, pp. 140-143. Conference paper, Published paper (Refereed)
    Abstract [en]

    Way-predictors have long been used to reduce dynamic cache energy without the performance loss of serial caches. However, they produce variable-latency hits, as incorrect predictions increase load-to-use latency. While the performance impact of these extra cycles has been well studied, the need to replay subsequent instructions in the pipeline due to the increased load latency has been ignored. In this work we show that way-predictors pay a significant performance penalty beyond previously studied effects, due to instruction replays caused by mispredictions. To address this, we propose a solution that learns the confidence of the way prediction and dynamically disables it when it is likely to mispredict and cause replays. This allows us to reduce cache latency (when we can trust the way-prediction) while still avoiding the need to replay instructions in the pipeline (by avoiding way-mispredictions). Standard way-predictors degrade IPC by 6.9% vs. a parallel cache due to 10% of the instructions being replayed (worst case 42.3%). While our solution decreases way-prediction accuracy by turning off the way-predictor in some cases when it would have been correct, it delivers higher performance than a standard way-predictor. Our confidence-based way-predictor degrades IPC by only 4.4% by replaying just 5.6% of the instructions (worst case 16.3%). This reduces the way-predictor's cache energy overhead, compared to a serial-access cache, from 8.5% to 3.7% on average and from 33.8% to 9.5% in the worst case.
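
    The confidence mechanism described above can be sketched as follows; the table size, counter width and thresholds are assumptions chosen for illustration, not the paper's parameters.

```python
# Hypothetical sketch: a per-PC saturating confidence counter decides whether
# to trust the way-prediction (fast, single-way probe) or fall back to a
# parallel probe of all ways, avoiding the replay penalty of a misprediction.

class ConfidentWayPredictor:
    def __init__(self, entries=256, max_conf=3, threshold=2):
        self.pred_way = [0] * entries          # last observed way per PC
        self.conf = [max_conf] * entries       # saturating confidence counters
        self.entries, self.max_conf, self.threshold = entries, max_conf, threshold

    def lookup(self, pc):
        i = pc % self.entries
        if self.conf[i] >= self.threshold:
            return self.pred_way[i]            # trust the prediction: probe one way
        return None                            # low confidence: parallel access instead

    def update(self, pc, actual_way):
        i = pc % self.entries
        if self.pred_way[i] == actual_way:
            self.conf[i] = min(self.max_conf, self.conf[i] + 1)
        else:
            self.conf[i] = max(0, self.conf[i] - 1)   # a trusted miss would have replayed
            self.pred_way[i] = actual_way

wp = ConfidentWayPredictor()
for pc, way in [(0x40, 1), (0x40, 1), (0x40, 2), (0x40, 2), (0x40, 2)]:
    guess = wp.lookup(pc)
    mode = "parallel probe" if guess is None else f"predict way {guess}"
    print(hex(pc), mode, "-> actual way", way)
    wp.update(pc, way)
```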

    Place, publisher, year, edition, pages
    IEEE, 2018
    Series
    Proceedings IEEE International Conference on Computer Design, ISSN 1063-6404, E-ISSN 2576-6996
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-361215 (URN), 10.1109/ICCD.2018.00029 (DOI), 000458293200018 (), 978-1-5386-8477-1 (ISBN)
    Conference
    IEEE 36th International Conference on Computer Design (ICCD), October 7–10, 2018, Orlando, FL, USA
    Available from: 2018-09-21 Created: 2018-09-21 Last updated: 2019-05-22. Bibliographically approved
    3. Minimizing Replay under Way-Prediction
    2019 (English). Report (Other academic)
    Abstract [en]

    Way-predictors are effective at reducing dynamic cache energy by reducing the number of ways accessed, but introduce additional latency for incorrect way-predictions. While previous work has studied the impact of the increased latency for incorrect way-predictions, we show that the latency variability has a far greater effect, as it forces replay of in-flight instructions on an incorrect way-prediction. To address the problem, we propose a solution that learns the confidence of the way-prediction and dynamically disables it when it is likely to mispredict. We improve this approach further by biasing the confidence to reduce latency variability at the cost of fewer way-predictions. Our results show that instruction replay in a way-predictor reduces IPC by 6.9% due to 10% of the instructions being replayed. Our confidence-based way-predictor degrades IPC by only 2.9% by replaying just 3.4% of the instructions, reducing the way-predictor's cache energy overhead (compared to a serial-access cache) from 8.5% to 1.9%.

    Series
    Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2019-003
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-383596 (URN)
    Available from: 2019-05-17 Created: 2019-05-17 Last updated: 2019-07-03. Bibliographically approved
    4. Filter caching for free: The untapped potential of the store-buffer
    2019 (English). In: Proc. 46th International Symposium on Computer Architecture, New York: ACM Press, 2019, pp. 436-448. Conference paper, Published paper (Refereed)
    Abstract [en]

    Modern processors contain store-buffers to allow stores to retire under a miss, thus hiding store-miss latency. The store-buffer needs to be large (for performance) and searched on every load (for correctness), thereby making it a costly structure in both area and energy. Yet on every load, the store-buffer is probed in parallel with the L1 and TLB, with no concern for the store-buffer's intrinsic hit rate or whether a store-buffer hit can be predicted to save energy by disabling the L1 and TLB probes.

    In this work we cache data that have been written back to memory in a unified store-queue/buffer/cache, and predict hits to avoid L1/TLB probes and save energy. By dynamically adjusting the allocation of entries between the store-queue/buffer/cache, we can achieve nearly optimal reuse, without causing stalls. We are able to do this efficiently and cheaply by recognizing key properties of stores: free caching (since they must be written into the store-buffer for correctness we need no additional data movement), cheap coherence (since we only need to track state changes of the local, dirty data in the store-buffer), and free and accurate hit prediction (since the memory dependence predictor already does this for scheduling).

    As a result, we are able to increase the store-buffer hit rate and reduce store-buffer/TLB/L1 dynamic energy by 11.8% (up to 26.4%) on SPEC2006 without hurting performance (average IPC improvements of 1.5%, up to 4.7%). The cost for these improvements is a 0.2% increase in L1 cache capacity (1 bit per line) and one additional tail pointer in the store-buffer.
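
    A toy sketch of the underlying idea of keeping retired stores around and predicting store-buffer hits so that the L1/TLB probe could be gated. The structure size, the simple PC-indexed predictor and all names are assumptions standing in for the paper's design (which reuses the memory dependence predictor), and recovery from a wrong hit prediction is omitted.

```python
from collections import OrderedDict

# Hypothetical sketch: retired stores stay in a unified store-queue/buffer/cache
# until their entries are reclaimed, and a simple PC-indexed predictor (standing
# in for the memory dependence predictor mentioned in the abstract) guesses
# whether a load will hit in the store-buffer, so the L1/TLB probe could be
# gated off. Recovery from a wrong "hit" prediction is omitted in this toy.

class StoreBufferCache:
    def __init__(self, entries=16):
        self.entries = entries
        self.data = OrderedDict()              # addr -> value, oldest entry first
        self.hit_predictor = {}                # load PC -> last observed outcome

    def store(self, addr, value):
        self.data[addr] = value
        self.data.move_to_end(addr)
        if len(self.data) > self.entries:      # reclaim the oldest retired store
            self.data.popitem(last=False)

    def load(self, pc, addr):
        predicted_hit = self.hit_predictor.get(pc, False)
        hit = addr in self.data
        self.hit_predictor[pc] = hit           # train on the actual outcome
        probed = "store-buffer only" if predicted_hit else "store-buffer + L1 + TLB"
        return self.data.get(addr), hit, probed

sb = StoreBufferCache()
sb.store(0x100, 42)
for _ in range(2):                             # the second load benefits from the prediction
    print(sb.load(pc=0x20, addr=0x100))
```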

    Place, publisher, year, edition, pages
    New York: ACM Press, 2019
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-383473 (URN), 10.1145/3307650.3322269 (DOI), 978-1-4503-6669-4 (ISBN)
    Conference
    ISCA 2019, June 22–26, Phoenix, AZ
    Research funder
    Knut and Alice Wallenberg Foundation; EU, Horizon 2020, 715283; EU, Horizon 2020, 801051; Swedish Foundation for Strategic Research, SM17-0064
    Available from: 2019-06-22 Created: 2019-05-16 Last updated: 2019-07-03. Bibliographically approved
    5. Efficient temporal and spatial load to load forwarding
    2020 (English). In: Proc. 26th International Symposium on High-Performance Computer Architecture, IEEE Computer Society, 2020. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    IEEE Computer Society, 2020
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-383477 (URN)
    Conference
    HPCA 2020, February 22–26, San Diego, CA
    Note

    to appear

    Available from: 2019-08-21 Created: 2019-05-16 Last updated: 2019-08-21. Bibliographically approved
  • 2.
    Amnell, Tobias
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Code synthesis for timed automata (2003). Licentiate thesis, monograph (Other academic)
    Abstract [en]

    In this thesis, we study executable behaviours of timed models. The focus is on synthesis of executable code with predictable behaviours from high level abstract models. We assume that a timed system consists of two parts: the control software and the plant (i.e. the environment to be controlled). Both are modelled as timed automata extended with real time tasks. We consider the extended timed automata as design models.

    We present a compilation procedure to transform design models into executable code, including a run-time scheduler (run-time system), preserving the correctness and schedulability of the models. The compilation procedure has been implemented in a prototype C-code generator for the brickOS operating system, included in the Times tool. We also present an animator, based on hybrid automata, to be used to describe a simulated environment (i.e. the plant) for timed systems. The tasks of the hybrid automata define differential equations, and the animator uses a differential equation solver to calculate step-wise approximations of real-valued variables. The animated objects, described as hybrid automata, are compiled by the Times tool into executable code using a procedure similar to that used for the controller software.

    To demonstrate the applicability of timed automata with tasks as a design language we have developed the control software for a production cell. The production cell is built in LEGO and is controlled by a Hitachi H8-based LEGO Mindstorms control brick. The control software has been analysed (using the Times tool) for schedulability and other safety properties. Using results from the analysis we were able to avoid generating code for parts of the design that could never be reached, and could also limit the amount of memory allocated for the task queue.
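
    As a rough illustration of the kind of design model discussed above (not the Times tool's code generator), the sketch below encodes a timed automaton whose edges carry a clock guard, clock resets and an optional released task, together with a naive delay/step semantics; generated controller code would, in essence, execute logic of this kind plus a scheduler for the task queue. All names are invented.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a timed automaton extended with tasks: each edge has a
# clock guard (bounds on one clock), clocks to reset, and an optional task that
# is released when the edge is taken.

@dataclass
class Edge:
    src: str
    dst: str
    guard: tuple                 # (clock, low, high): enabled when low <= value <= high
    resets: tuple = ()
    task: str = ""

@dataclass
class Automaton:
    location: str
    clocks: dict
    edges: list
    task_queue: list = field(default_factory=list)

    def delay(self, d):
        for c in self.clocks:                 # all clocks advance at the same rate
            self.clocks[c] += d

    def step(self):
        for e in self.edges:
            clock, lo, hi = e.guard
            if e.src == self.location and lo <= self.clocks[clock] <= hi:
                self.location = e.dst
                for c in e.resets:
                    self.clocks[c] = 0.0
                if e.task:
                    self.task_queue.append(e.task)   # released task, to be scheduled
                return e
        return None                                  # no enabled edge

ctrl = Automaton("Idle", {"x": 0.0},
                 [Edge("Idle", "Active", ("x", 2.0, 5.0), resets=("x",), task="ctrl_task")])
ctrl.delay(3.0)                                      # let 3 time units pass
ctrl.step()
print(ctrl.location, ctrl.clocks, ctrl.task_queue)
```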

  • 3.
    Backeman, Peter
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    New techniques for handling quantifiers in Boolean and first-order logic (2016). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The automation of reasoning has been an aim of research for a long time. Already in the 17th century, the famous mathematician Leibniz invented a mechanical calculator capable of performing all four basic arithmetic operations. Although automatic reasoning can be done in different fields, many of the procedures for automated reasoning handle formulas of first-order logic. Examples of use cases include hardware verification, program analysis and knowledge representation.

    One of the fundamental challenges in first-order logic is handling quantifiers and the equality predicate. On the one hand, SMT solvers (Satisfiability Modulo Theories) are quite efficient at theory reasoning; on the other hand, they have limited support for complete and efficient reasoning with quantifiers. Sequent, tableau and resolution calculi are methods used to construct proofs for first-order formulas and can use more efficient techniques to handle quantifiers. Unfortunately, in contrast to SMT, handling theories is more difficult in these calculi.

    In this thesis we investigate methods to handle quantifiers by restricting search spaces to finite domains which can be explored in a systematic manner. We present this approach in two different contexts.

    First we introduce a function synthesis based on template-based quantifier elimination, which is applied to gene interaction computation. The function synthesis is shown to be capable of generating smaller representations of solutions than previous solvers, and by restricting the constructed functions to certain forms we can produce formulas which can more easily be interpreted by a biologist.

    Second, we introduce the concept of Bounded Rigid E-Unification (BREU), a finite form of unification that can be used to define a sound and complete sequent calculus for first-order logic with equality. We show how to solve this bounded form of unification efficiently, yielding a first-order theorem prover utilizing BREU that is competitive with other state-of-the-art tableau theorem provers.
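
    The bounded setting can be illustrated with a deliberately naive toy (my own illustration, not the thesis's algorithm): each variable may only be instantiated from a small given set of constants, so candidate substitutions can be enumerated and checked against the ground equations with a union-find congruence test.

```python
from itertools import product

# Toy illustration of the *bounded* flavour of rigid E-unification: each
# variable ranges over a given finite set of constants, so the search space is
# finite and can be enumerated. Equality modulo the ground equations is decided
# with union-find. Real BREU handles function symbols and is solved far more
# cleverly than this brute force.

def solve_bounded(eqs, goal, domains):
    """eqs: iterable of (const, const); goal: (term, term) where a term is a
    constant or a variable listed in domains; domains: var -> allowed constants."""
    consts = {c for eq in eqs for c in eq} | {t for t in goal if t not in domains}
    consts |= {c for dom in domains.values() for c in dom}
    parent = {c: c for c in consts}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path compression
            x = parent[x]
        return x

    for a, b in eqs:                           # build the congruence of the ground equations
        parent[find(a)] = find(b)

    variables = list(domains)
    for choice in product(*(domains[v] for v in variables)):
        subst = dict(zip(variables, choice))
        lhs, rhs = (subst.get(t, t) for t in goal)
        if find(lhs) == find(rhs):
            return subst                       # a unifier within the given bounds
    return None

eqs = [("a", "b"), ("c", "d")]
print(solve_bounded(eqs, ("X", "d"), {"X": ["a", "c"]}))   # -> {'X': 'c'}
```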

    List of papers
    1. Theorem proving with bounded rigid E-unification
    2015 (English). In: Automated Deduction – CADE-25, Springer, 2015, pp. 572-587. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Springer, 2015
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 9195
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-268734 (URN), 10.1007/978-3-319-21401-6_39 (DOI), 000363947500039 (), 978-3-319-21400-9 (ISBN)
    Conference
    25th International Conference on Automated Deduction; August 1–7, 2015, Berlin, Germany
    Research funder
    Swedish Research Council, 2014-5484
    Available from: 2015-07-25 Created: 2015-12-09 Last updated: 2018-01-10. Bibliographically approved
    2. Efficient algorithms for bounded rigid E-unification
    2015 (English). In: Automated Reasoning with Analytic Tableaux and Related Methods, Springer, 2015, pp. 70-85. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Springer, 2015
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 9323
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-268735 (URN), 10.1007/978-3-319-24312-2_6 (DOI), 000366125200009 (), 978-3-319-24311-5 (ISBN)
    Conference
    TABLEAUX 2015, September 21–24, Wroclaw, Poland
    Research funder
    Swedish Research Council, 2014-5484
    Available from: 2015-11-08 Created: 2015-12-09 Last updated: 2018-01-10. Bibliographically approved
    3. Algebraic polynomial-based synthesis for abstract Boolean network analysis
    2016 (English). In: Satisfiability Modulo Theories: SMT 2016, RWTH Aachen University, 2016, pp. 41-50. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    RWTH Aachen University, 2016
    Series
    CEUR Workshop Proceedings, ISSN 1613-0073 ; 1617
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-311587 (URN)
    Conference
    14th International Workshop on Satisfiability Modulo Theories
    Available from: 2016-07-02 Created: 2016-12-29 Last updated: 2018-01-13. Bibliographically approved
  • 4.
    Ben Henda, Noomene
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Infinite-state Stochastic and Parameterized Systems (2008). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    A major current challenge is to extend formal methods to handle infinite-state systems. Infiniteness stems from the fact that the system operates on unbounded data structures, such as stacks, queues, clocks and integers, as well as from parameterization.

    Systems with unbounded data structures are natural models for reasoning about communication protocols, concurrent programs, real-time systems, etc., while parameterized systems are more suitable when the system consists of an arbitrary number of identical processes, which is the case for cache coherence protocols, distributed algorithms and so forth.

    In this thesis, we consider model checking problems for certain fundamental classes of probabilistic infinite-state systems, as well as the verification of safety properties in parameterized systems. First, we consider probabilistic systems with unbounded data structures. In particular, we study probabilistic extensions of Lossy Channel Systems (PLCS), Vector Addition Systems with States (PVASS) and Noisy Turing Machines (PNTM). We show how the semantics of such models can be described by infinite-state Markov chains, and then define certain abstract properties which allow model checking of several qualitative and quantitative problems.

    Then, we consider parameterized systems and provide a method which allows checking safety for several classes that differ in the topologies (linear or tree) and the semantics (atomic or non-atomic). The method is based on deriving an over-approximation which allows the use of a symbolic backward reachability scheme. For each class, the over-approximation we define guarantees monotonicity of the induced approximate transition system with respect to an appropriate order. This property is convenient in the sense that it preserves upward closedness when computing sets of predecessors.
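
    A generic sketch of such a symbolic backward reachability scheme, under the usual assumptions for monotone (well-structured) systems: upward-closed sets are represented by finite sets of minimal elements, and a model-specific pre function returns a basis of predecessors. The componentwise-ordered vector instance below is my own toy example, not one of the thesis's system classes.

```python
# Sketch of symbolic backward reachability for a monotone system. Upward-closed
# sets are represented by finite sets of minimal elements; `pre` must return a
# finite basis of the predecessors of the upward closure of a configuration.

def leq(a, b):                     # componentwise ordering on tuples
    return all(x <= y for x, y in zip(a, b))

def minimize(configs):             # keep only the minimal elements of a basis
    return [c for c in configs if not any(leq(d, c) and d != c for d in configs)]

def backward_reach(bad_basis, initial, pre):
    basis = minimize(list(bad_basis))
    frontier = list(basis)
    while frontier:
        new = []
        for c in frontier:
            for p in pre(c):
                if not any(leq(b, p) for b in basis):   # not already covered
                    new.append(p)
        basis = minimize(basis + new)
        frontier = new
    return any(leq(b, initial) for b in basis)          # can `initial` reach a bad state?

# Toy instance: one transition that consumes a token from place 0 and produces
# one in place 1; "bad" means at least 2 tokens in place 1.
def pre(c):
    return [(c[0] + 1, max(c[1] - 1, 0))]

print(backward_reach(bad_basis=[(0, 2)], initial=(2, 0), pre=pre))   # True
```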

  • 5.
    Bengtson, Jesper
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Formalising process calculi (2010). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    As the complexity of programs increases, so does the complexity of the models required to reason about them. Process calculi were introduced in the early 1980s and have since then been used to model communication protocols of varying size and scope. Whereas modeling sophisticated protocols in simple process algebras like CCS or the pi-calculus is doable, expressing the models required is often gruesome and error prone. To combat this, more advanced process calculi were introduced, which significantly reduce the complexity of the models. However, this simplicity comes at a price: the theories of the calculi themselves instead become gruesome and error prone, and establishing their mathematical and logical properties has turned out to be difficult. Many of the proposed calculi have later turned out to be inconsistent.

    The contribution of this thesis is twofold. First we provide methodologies to formalise the meta-theory of process calculi in an interactive theorem prover. These are used to formalise significant parts of the meta-theory of CCS and the pi-calculus in the theorem prover Isabelle, using Nominal Logic to allow for a smooth treatment of the binders. Second we introduce and formalise psi-calculi, a framework for process calculi incorporating several existing ones, including those we already formalised, and which is significantly simpler and substantially more expressive. Our methods scale well as complexity of the calculi increases.

    The formalised results for all calculi include congruence results for both strong and weak bisimilarities, in the case of the pi-calculus for both the early and the late operational semantics. For the finite pi-calculus, we also formalise the proof that the axiomatisation of strong late bisimilarity is sound and complete. We believe psi-calculi to be the state of the art of process calculi, and our formalisation to be the most extensive formalisation of process calculi ever done inside a theorem prover.

  • 6.
    Bengtsson, Johan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Clocks, DBMs and States in Timed Systems (2002). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Today, computers are used to control various technical systems in our society. In many cases, time plays a crucial role in the operation of computers embedded in such systems. This thesis is about techniques and tools for the analysis of timing behaviours of computer systems. Its main contributions are in the development and implementation of UPPAAL, a tool designed to automate the analysis process of systems modelled as timed automata.

    As the first contribution, we present a software package for timing constraints represented as Difference Bound Matrices (DBMs). We describe in detail all the data structures and operations for DBMs needed in state-space exploration of timed automata, as well as techniques for efficient implementation. In particular, we have developed two normalisation algorithms to guarantee termination of reachability analysis for timed automata containing constraints on clock differences; they transform DBMs according to not only the maximal constants of clocks, as in algorithms published in the literature, but also the difference constraints appearing in the automata. The second contribution of this thesis is a collection of low-level optimisations of the internal data structures and algorithms of UPPAAL to minimise memory and time consumption. We present compression techniques that allow the state space of a system to be efficiently stored and manipulated in main memory. We also study super-trace and hash-compaction methods for timed automata to deal with system models for which the size of the available memory is too small to store the explored state space. Our experiments show that these techniques have greatly improved the performance of UPPAAL. The third contribution is in partial-order reduction techniques for timed systems. A major problem in automatic verification is the large number of redundant states and transitions introduced by modelling concurrent events as interleaved transitions. We propose a notion of committed locations for timed automata. Committed locations are annotations that can be used not only for modelling intermediate states within atomic transitions, but also for guiding the model checker to ignore unnecessary interleavings in state-space exploration. The notion of committed locations has been generalised to give a local-time semantics for networks of timed automata, which allows for the application of existing partial-order reduction techniques to timed systems.
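
    For concreteness, here is a minimal DBM sketch: an (n+1)x(n+1) matrix of upper bounds on clock differences (with the special reference clock 0), made canonical with a Floyd-Warshall pass. Strictness of bounds, the normalisation algorithms and the other operations of the real package are omitted; this is an illustration, not the UPPAAL DBM library.

```python
import itertools

INF = float("inf")

# Minimal DBM sketch: entry m[i][j] is an upper bound on clock_i - clock_j,
# with clock 0 fixed at value 0. Only non-strict bounds are modelled here.

def dbm_init(n):
    """All clocks equal to 0: clock_i - clock_j <= 0 for all i, j."""
    return [[0] * (n + 1) for _ in range(n + 1)]

def canonicalize(m):
    """Tighten all bounds (all-pairs shortest paths, Floyd-Warshall style)."""
    n = len(m)
    for k, i, j in itertools.product(range(n), repeat=3):
        m[i][j] = min(m[i][j], m[i][k] + m[k][j])
    return m

def delay(m):
    """Let time pass: drop the upper bounds on clock_i - clock_0."""
    for i in range(1, len(m)):
        m[i][0] = INF
    return m

def constrain(m, i, j, bound):
    """Add the constraint clock_i - clock_j <= bound, then re-canonicalize."""
    m[i][j] = min(m[i][j], bound)
    return canonicalize(m)

def is_empty(m):
    return any(m[i][i] < 0 for i in range(len(m)))

z = dbm_init(2)              # clocks x (index 1) and y (index 2), both start at 0
z = delay(z)                 # x and y grow together
z = constrain(z, 1, 0, 5)    # guard: x <= 5
z = constrain(z, 0, 2, -3)   # guard: y >= 3   (0 - y <= -3)
print("empty:", is_empty(z))
print(z)
```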

  • 7.
    Bengtsson, Johan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Efficient symbolic state exploration of timed systems: Theory and implementation (2001). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Timing aspects are important for the correctness of safety-critical systems. It is crucial that these aspects are carefully analysed in designing such systems. UPPAAL is a tool designed to automate the analysis process. In UPPAAL, a system under construction is described as a network of timed automata and the desired properties of the system can be specified using a query language. UPPAAL can then be used to explore the state space of the system description to search for states violating (or satisfying) the properties. If such states are found, the tool provides diagnostic information, in the form of executions leading to those states, to help the designers, for example, to locate bugs in the design.

    The major problem for UPPAAL and other tools for timed systems to deal with industrial-size applications is the state space explosion. This thesis studies the sources of the problem and develops techniques for real-time model checkers, such as UPPAAL, to attack the problem. As contributions, we have developed the notion of committed locations to model atomicity and local-time semantics for timed systems to allow partial order reductions, and a number of implementation techniques to reduce time and space consumption in state space exploration. The techniques are studied and compared by case studies. Our experiments demonstrate significant improvements on the performance of UPPAAL.

  • 8.
    Berg, Erik
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Efficient and Flexible Characterization of Data Locality through Native Execution Sampling (2005). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Data locality is central to modern computer designs. The widening gap between processor speed and memory latency has introduced the need for a deep hierarchy of caches. Thus, the performance of an application is to a large extent dependent on the amount of data locality the caches can exploit. Some data locality comes naturally from the way most programs are written and the way their data is allocated in the memory. Compilers further try to create data locality by loop transformations and optimized data layout. Different ways of writing a program and/or laying out its data may improve an application’s locality even more. However, it is far from obvious how such a locality optimization can be achieved, especially since the optimizing compiler may have left the optimization job half done. Thus, efficient tools are needed to guide the software developers on their quest for data locality.

    The main contribution of this dissertation is a novel sample-based method for analyzing the data locality of an application. Very sparse data is collected during a single execution of the studied application. The sparse sampling adds minimal overhead to the execution time, which enables complex applications running realistic data sets to be studied. The architecture-independent information collected during the execution is fed to a mathematical cache model for predicting the cache miss ratio. The sparsely collected data can be used to characterize the application's data locality with respect to almost any possible cache hierarchy, such as complicated multiprocessor memory systems with multilevel cache hierarchies. Any combination of cache size, cache line size and degree of sharing can be modeled. Each new modeled design point takes only a fraction of a second to evaluate, even though the application from which the sampled data was collected may have executed for hours. This makes the tool useful not just for software developers, but also for hardware developers who need to evaluate a huge memory-system design space.

    We also discuss different ways of presenting data-locality information to a programmer in an intuitive and easily interpreted way. Some of the locality metrics we introduce utilize the flexibility of our algorithm and its ability to vary different cache parameters for one run. The dissertation also presents several prototype implementations of tools for profiling the memory system.
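
    In the spirit of the statistical cache model described above, the sketch below (my simplified paraphrase, not the author's code) estimates the miss ratio of a random-replacement cache of a given size from sparsely sampled reuse distances by iterating a fixed-point equation; the sample values are invented.

```python
# Sketch of a StatCache-style estimate (simplified paraphrase): for a fully
# associative cache with L lines and random replacement, an access whose
# previous touch of the same line lies r memory references back misses with
# probability roughly 1 - (1 - 1/L)**(m * r), where m is the overall miss
# ratio (m*r approximates the number of intervening misses, each evicting our
# line with probability 1/L). Averaging over sampled reuse distances gives a
# fixed-point equation in m, solved here by simple iteration.

def estimate_miss_ratio(reuse_distances, cache_lines, iters=100):
    m = 0.1                                   # initial guess
    for _ in range(iters):
        total = sum(1.0 - (1.0 - 1.0 / cache_lines) ** (m * r)
                    for r in reuse_distances)
        m = total / len(reuse_distances)      # new average miss probability
    return m

# Sparse samples of reuse distances (in memory references); the same samples
# can be fed to models of caches of any size, which is the point made above.
samples = [10, 50, 200, 1000, 5000, 20000, 100000] * 10
for lines in (512, 4096, 32768):
    print(lines, "lines -> estimated miss ratio",
          round(estimate_miss_ratio(samples, lines), 3))
```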

    List of papers
    1. SIP: Performance Tuning through Source Code Interdependence
    2002. In: Proceedings of the 8th International Euro-Par Conference. Article in journal (Refereed) Published
    Identifiers
    urn:nbn:se:uu:diva-93582 (URN)
    Available from: 2005-10-19 Created: 2005-10-19. Bibliographically approved
    2. StatCache: A Probabilistic Approach to Efficient and Accurate Data Locality Analysis
    2004. In: Proceedings of the 2004 IEEE International Symposium on Performance Analysis of Systems and Software. Article in journal (Refereed) Published
    Identifiers
    urn:nbn:se:uu:diva-93583 (URN)
    Available from: 2005-10-19 Created: 2005-10-19. Bibliographically approved
    3. Fast Data-Locality Profiling of Native Execution
    2005 (English). In: ACM SIGMETRICS Performance Evaluation Review, ISSN 0163-5999, Vol. 33, no. 1, pp. 169-180. Article in journal (Refereed) Published
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-93584 (URN), 10.1145/1071690.1064232 (DOI)
    Available from: 2005-10-19 Created: 2005-10-19 Last updated: 2018-01-13. Bibliographically approved
    4. A Statistical Multiprocessor Cache Model
    2006 (English). In: Proc. International Symposium on Performance Analysis of Systems and Software: ISPASS 2006, Piscataway, NJ: IEEE, 2006, pp. 89-99. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Piscataway, NJ: IEEE, 2006
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-93585 (URN), 10.1109/ISPASS.2006.1620793 (DOI), 1-4244-0186-0 (ISBN)
    Available from: 2005-10-19 Created: 2005-10-19 Last updated: 2018-01-13. Bibliographically approved
  • 9.
    Berg, Erik
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Methods for run time analysis of data locality (2003). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The growing gap between processor clock speed and DRAM access time puts new demands on software and development tools. Deep memory hierarchies and high cache miss penalties in present and emerging computer systems make execution time sensitive to data locality. Therefore, developers of performance-critical applications and optimizing compilers must be aware of data locality and maximize cache utilization to produce fast code. To aid the optimization process and help understanding data locality, we need methods to analyze programs and pinpoint poor cache utilization and possible optimization opportunities.

    Current methods for run-time analysis of data locality and cache behavior include functional cache simulation, often combined with set sampling or time sampling; regularity metrics based on strides and data streams; and hardware monitoring. However, they all share a trade-off between run-time overhead, accuracy and explanatory power.

    This thesis presents methods to efficiently analyze data locality at run time based on cache modeling. It suggests source-interdependence profiling as a technique for examining the cache behavior of applications and locating source code statements and/or data structures that cause poor cache utilization. The thesis also introduces a novel statistical cache-modeling technique, StatCache. Rather than implementing a functional cache simulator, StatCache estimates the miss ratios of fully-associative caches using probability theory. A major advantage of the method is that the miss ratio estimates can be based on very sparse sampling. Further, a single run of an application is enough to estimate the miss ratio of caches of arbitrary sizes and line sizes and to study both spatial and temporal data locality.

  • 10.
    Berg, Therese
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Regular inference for reactive systems (2006). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Models of reactive systems play a central role in many techniques for verification and analysis of reactive systems. Both a specification of the system and the abstract behavior of the system can be expressed in a formal model. Compliance with the functional parts of the specification can be checked in different ways. Model checking techniques can be applied to a model of the system or directly to source code. In testing, model-based techniques can generate test suites from the specification. A bottleneck in model-based techniques, however, is to construct a model of the system. This work concerns a technique that automatically constructs a model of a system without access to its specification, code or internal structure. We assume that responses of the system to sequences of input can be observed. In this setting, so-called regular inference techniques build a model of the system based on system responses to selected input sequences.

    There are three main contributions in this thesis. The first is a survey of the most well-known techniques for regular inference. The second is an analysis of Angluin's algorithm for regular inference on synthesized examples. On a particular type of examples with prefix-closed languages, typically used to model reactive systems, the required number of input sequences grows approximately quadratically in the number of transitions of the system. However, using an optimization for systems with prefix-closed languages we were able to reduce the number of required input sequences by about 20%. The third contribution is a regular inference technique developed for systems with parameters. This technique aims to better handle entities of communication protocols, where messages normally have many parameters of which few determine the subsequent behavior of the system. Experiments with our implementation of the technique confirm a reduction of the required number of input sequences in comparison with Angluin's algorithm.
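
    For reference, here is a compact sketch of the L*-style learning loop analysed above, using a common variant that adds counterexample suffixes to the table columns and an idealized teacher answered by brute-force testing. The alphabet, target language and length bounds are invented for illustration; this is not the thesis's implementation.

```python
from itertools import product

# Compact sketch of an L*-style learner. Membership queries call `member`;
# equivalence queries are answered by exhaustive testing up to a fixed length.

ALPHABET = "ab"
member = lambda w: w.count("a") % 2 == 0       # target language: even number of 'a's

def row(s, E):
    return tuple(member(s + e) for e in E)

def learn():
    S, E = [""], [""]
    while True:
        changed = True                          # make the observation table closed
        while changed:
            changed = False
            rows = {row(s, E) for s in S}
            for s, a in product(list(S), ALPHABET):
                if row(s + a, E) not in rows:
                    S.append(s + a)
                    changed = True
                    break
        start = row("", E)                      # build the hypothesis DFA over rows
        accept = {row(s, E) for s in S if member(s)}
        delta = {(row(s, E), a): row(s + a, E) for s in S for a in ALPHABET}

        def runs(w):
            q = start
            for a in w:
                q = delta[(q, a)]
            return q in accept

        words = ("".join(w) for n in range(7) for w in product(ALPHABET, repeat=n))
        cex = next((w for w in words if runs(w) != member(w)), None)
        if cex is None:
            return runs
        for i in range(1, len(cex) + 1):        # add all suffixes of the counterexample
            if cex[-i:] not in E:
                E.append(cex[-i:])

dfa = learn()
print([w for w in ("", "a", "aa", "ab", "aba", "abab") if dfa(w)])
```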

  • 11.
    Berglund, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Learning computer systems in a distributed project course: The what, why, how and where (2005). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Senior university students taking an internationally distributed project course in computer systems find themselves in a complex learning situation. To understand how they experience computer systems and act in their learning situation, the what, the why, the how and the where of their learning have been studied from the students’ perspective. The what aspect concerns the students’ understanding of concepts within computer systems: network protocols. The why aspect concerns the students’ objectives to learn computer systems. The how aspect concerns how the students go about learning. The where aspect concerns the students’ experience of their learning environment. These metaphorical entities are then synthesised to form a whole.

    The emphasis on the students’ experience of their learning motivates a phenomenographic research approach as the core of a study that is extended with elements of activity theory. The methodological framework that is developed from these research approaches enables the researcher to retain focus on learning, and specifically the learning of computer systems, throughout.

    By applying the framework, the complexity in the learning is unpacked and conclusions are drawn on the students’ learning of computer systems. The results are structural, qualitative, and empirically derived from interview data. They depict the students’ experience of their learning of computer systems in their experienced learning situation and highlight factors that facilitate learning.

    The results comprise sets of qualitatively different categories that describe how the students relate to their learning in their experienced learning environment. The sets of categories, grouped after the four components (what, why, how and where), are synthesised to describe the whole of the students’ experience of learning computer systems.

    This study advances the discussion about learning computer systems and demonstrates how theoretically anchored research contributes to teaching and learning in the field. Its multi-faceted, multi-disciplinary character invites further debate, and thus, advances the field.

  • 12.
    Berglund, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    On the understanding of computer network protocols (2002). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    How students learn about network protocols is studied in a project-centred, internationally distributed, university course in computer systems taught jointly by two universities. Insights into students' understanding of basic concepts within computer networks are gained through an empirical phenomenographic research approach.

    The use of phenomenography as a research approach makes it possible to learn about computer science as it is experienced by the students. The context in which the research is carried out, and issues concerning by whom the context is experienced, are investigated and form part of the methodological basis.

    Students' understanding of some protocols that are used within the project, as well as their experience of the general concept of network protocols are investigated, and different ways of experiencing the protocols are discerned. Some aspects that indicate good learning outcomes are identified, such as being capable of understanding a protocol in different ways and of making relevant choices between the ways it could be experienced according to the context in which it appears.

    Based on these results a discussion on learning and teaching is developed. It is argued that a variation in the context in which the protocol is experienced promotes good learning, since different ways of experiencing a protocol are useful with different tasks to hand. A student with a good understanding of network protocols can choose in a situationally relevant way between different ways of experiencing a protocol.

    List of papers
    1. How do students understand network protocols?: A phenomenographic study
    2002 (English). Report (Other academic)
    Series
    Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2002-006
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-44859 (URN)
    Available from: 2008-11-26 Created: 2008-11-26 Last updated: 2018-01-11
    2. On context in phenomenographic research on understanding heat and temperature
    2002 (English). In: EARLI, Bi-annual Symposium, Fribourg, Switzerland, 2002. Conference paper, Published paper (Refereed)
    Abstract [en]

    Starting from an empirical study of lay adults' understanding of heat and temperature, we distinguish between different meanings of "context" in phenomenographic research. To confuse the variation in ways of experiencing the context(s) of the study with the variation in ways of experiencing the phenomenon of study is to risk losing fundamental insights. We discuss context as experienced and as interwoven with the experience of the phenomenon, and analyse its significance in two dimensions: (1) the stage of the research project: formulating the question, collecting data, analysing data and deploying results; and (2) "who is experiencing" the context: the individual, the collective, or the researcher. The arguments are illustrated from the empirical study.

    HSV category
    Identifiers
    urn:nbn:se:uu:diva-18488 (URN)
    Conference
    EARLI, Bi-annual Symposium, Fribourg, Switzerland
    Available from: 2008-11-26 Created: 2008-11-26 Last updated: 2018-01-12
  • 13.
    Bjurefors, Fredrik
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Measurements in opportunistic networks (2012). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Opportunistic networks are a subset of delay tolerant networks where the contacts are unscheduled. Such networks can be formed ad hoc by wireless devices, such as mobile phones and laptops. In this work we use a data-centric architecture for opportunistic networks to evaluate data dissemination overhead, congestion in nodes' buffers, and the impact of transfer ordering. Dissemination brings overhead since data is replicated to be spread in the network, and this overhead leads to congestion, i.e., overloaded buffers.

    We develop and implement an emulation testbed to experimentally evaluate properties of opportunistic networks. We evaluate the repeatability of experiments in the emulation testbed, which is based on virtual computers, and show that the timing variations are on the order of milliseconds.

    The testbed was used to investigate overhead in data dissemination, congestion avoidance, and transfer ordering in opportunistic networks. We show that the overhead can be reduced by informing other nodes in the network about what data a node is carrying. Congestion avoidance was evaluated in terms of buffer management, since that is the tool available in an opportunistic network to handle congestion. It was shown that using replication information about the data objects in the buffer yields the best results. We show that in a data-centric architecture where each data item is valued differently, transfer ordering is important to achieve delivery of the most valued data.
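
    A toy sketch of the buffer-management comparison above (policy names, statistics and numbers are invented): when a node's buffer is full, a replication-based policy evicts the message already seen at the most other nodes, while a content-based policy evicts by value.

```python
import random

# Toy sketch of buffer management in an opportunistic node: when the buffer is
# full, a congestion-avoidance policy must pick a victim message.

class Node:
    def __init__(self, capacity, policy):
        self.capacity, self.policy, self.buffer = capacity, policy, {}

    def receive(self, msg_id, replicas_seen, value):
        if msg_id in self.buffer:              # already carried: update statistics
            entry = self.buffer[msg_id]
            entry["replicas"] = max(entry["replicas"], replicas_seen)
            return
        if len(self.buffer) >= self.capacity:  # buffer full: evict per policy
            del self.buffer[self.policy(self.buffer)]
        self.buffer[msg_id] = {"replicas": replicas_seen, "value": value}

def most_replicated(buffer):                   # well-replicated data is safest to drop
    return max(buffer, key=lambda m: buffer[m]["replicas"])

def lowest_value(buffer):                      # content-based alternative
    return min(buffer, key=lambda m: buffer[m]["value"])

random.seed(1)
for name, policy in (("replication-based", most_replicated), ("content-based", lowest_value)):
    node = Node(capacity=4, policy=policy)
    for i in range(10):
        node.receive(f"m{i}", replicas_seen=random.randint(0, 8), value=random.random())
    print(name, sorted(node.buffer))
```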

  • 14.
    Bjurefors, Fredrik
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Opportunistic Networking: Congestion, Transfer Ordering and Resilience (2014). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Opportunistic networks are constructed by devices carried by people and vehicles. The devices use short-range radio to communicate. Since the network is mobile and often sparse in terms of node contacts, nodes store messages in their buffers, carry them, and forward them upon node encounters. This form of communication leads to a set of challenging issues that we investigate: congestion, transfer ordering, and resilience.

    Congestion occurs in opportunistic networks when a node's buffers become full. To be able to receive new messages, old messages have to be evicted. We show that buffer eviction strategies based on replication statistics perform better than strategies that evict messages based on the content of the message.

    We show that transfer ordering has a significant impact on the dissemination of messages during time-limited contacts. We find that transfer strategies satisfying global requests yield a higher delivery ratio but a longer delay for the most requested data, compared to satisfying the neighboring node's requests.

    Finally, we assess the resilience of opportunistic networks by simulating different types of attacks. Instead of enumerating all possible attack combinations, which would lead to exhaustive evaluations, we introduce a method that uses heuristics to approximate the extreme outcomes an attack can have. The method yields a lower and upper bound for the evaluated metric over the different realizations of the attack. We show that the outcome of some types of attacks is harder to predict, and that the impact of other attacks varies due to the properties of the attack, the forwarding protocol, and the mobility pattern.
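
    The bounding idea generalizes to any simulator that maps one concrete attack realization to a metric value; the sketch below (my illustration, with a stand-in toy simulator and invented numbers) approximates the lower and upper bounds by random restarts and greedy local search instead of reporting only an average.

```python
import random

# Generic sketch of approximating extreme attack outcomes: instead of averaging
# a metric (e.g. delivery ratio) over random realizations of an attack, search
# heuristically for realizations that roughly minimize and maximize it.
# The simulator is a toy: an attack jams k nodes, and delivery drops with how
# "central" the jammed nodes are.

NODES = list(range(20))
CENTRALITY = {n: random.Random(n).random() for n in NODES}   # fixed toy centrality

def simulate(jammed):
    """Toy delivery-ratio model; a real study would run the forwarding protocol."""
    return max(0.0, 1.0 - sum(CENTRALITY[n] for n in jammed) / 5.0)

def bounds(k, restarts=20, steps=200, seed=0):
    rng = random.Random(seed)
    lo, hi = 1.0, 0.0
    for _ in range(restarts):                       # random restarts + greedy local search
        current = rng.sample(NODES, k)
        for _ in range(steps):
            candidate = list(current)
            candidate[rng.randrange(k)] = rng.choice(NODES)
            if len(set(candidate)) < k:
                continue
            value = simulate(candidate)
            hi = max(hi, value)                     # best case seen -> upper bound
            if value < simulate(current):           # move toward worse outcomes
                current = candidate
        lo = min(lo, simulate(current))             # worst case found -> lower bound
    return lo, hi

average = sum(simulate(random.Random(s).sample(NODES, 3)) for s in range(200)) / 200
low, high = bounds(k=3)
print(f"average {average:.2f}, bounds [{low:.2f}, {high:.2f}]")
```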

    List of papers
    1. Haggle Testbed: a Testbed for Opportunistic Networks
    2011 (English). In: Proceedings of the 7th Swedish National Computer Networking Workshop, 2011. Conference paper, Published paper (Refereed)
    Identifiers
    urn:nbn:se:uu:diva-155530 (URN)
    Projects
    Haggle
    Available from: 2011-06-23 Created: 2011-06-23 Last updated: 2014-06-30
    2. Congestion Avoidance in a Data-Centric Opportunistic Network
    2011 (English). In: Proceedings of the 2011 ACM SIGCOMM Workshop on Information-Centric Networking (ICN-2011), 2011. Conference paper, Published paper (Refereed)
    Identifiers
    urn:nbn:se:uu:diva-155528 (URN)
    Projects
    ResumeNet
    Available from: 2011-06-23 Created: 2011-06-23 Last updated: 2014-06-30
    3. Making the Most of Your Contacts: Transfer Ordering in Data-Centric Opportunistic Networks
    2012 (English). In: Proceedings of the 2012 ACM MobiOpp Workshop on Mobile Opportunistic Networks, Zürich: ACM Press, 2012. Conference paper, Published paper (Refereed)
    Abstract [en]

    Opportunistic networks use unpredictable and time-limited contacts to disseminate data. Therefore, it is important that protocols transfer useful data when contacts do occur. Specifically, in a data-centric network, nodes benefit from receiving data relevant to their interests. To this end, we study five strategies to select and order the data to be exchanged during a limited contact, and measure their ability to promptly and efficiently deliver highly relevant data.

    Our trace-driven experiments on an emulation testbed suggest that nodes benefit in the short term from ordering data transfers to satisfy local interests. However, this can lead to suboptimal long-term system performance. Restricting sharing based on matching nodes' interests can lead to segregation of the network, and limit useful dissemination of data. A non-local understanding of other nodes' interests is necessary to effectively move data across the network. If ordering of transfers for data relevance is not explicitly considered, performance is comparable to random ordering, which limits the delivery of individually relevant data.

    Place, publisher, year, edition, pages
    Zürich: ACM Press, 2012
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-171587 (URN)
    Conference
    ACM MobiOpp
    Projects
    ResumeNet
    Available from: 2012-03-22 Created: 2012-03-22 Last updated: 2014-06-30
    4. Resilience and Opportunistic Forwarding: Beyond Average Value Analysis
    2014 (English). In: Computer Communications, ISSN 0140-3664, E-ISSN 1873-703X, Vol. 48, no. SI, pp. 111-120. Article in journal (Refereed) Published
    Abstract [en]

    Opportunistic networks are systems with highly distributed operation, relying on the altruistic cooperation of highly heterogeneous, and not always software- and hardware-compatible, user nodes. Moreover, the absence of central coordination and control makes them vulnerable to malicious attacks. In this paper, we study the resilience of popular forwarding protocols to a representative set of challenges to their normal operation. These include jamming that locally disturbs message transfer between nodes, hardware/software failures and incompatibility among nodes rendering contact opportunities useless, and free-riding phenomena. We first formulate and promote the metric envelope concept as a tool for assessing the resilience of opportunistic forwarding schemes. Metric envelopes depart from the standard practice of average value analysis and explicitly account for the differentiated challenge impact due to node heterogeneity (device capabilities, mobility) and attackers' intelligence. We then propose heuristics to generate worst- and best-case challenge realization scenarios and approximate the lower and upper bounds of the metric envelopes. Finally, we demonstrate the methodology by assessing the resilience of three popular forwarding protocols in the presence of the three challenges, and under a comprehensive range of mobility patterns. The metric envelope approach provides better insights into the level of protection that path diversity and message replication provide against different challenges, and enables more informed choices in opportunistic forwarding when network resilience becomes important.

    HSV category
    Identifiers
    urn:nbn:se:uu:diva-222822 (URN), 10.1016/j.comcom.2014.04.004 (DOI), 000337883200010 ()
    Projects
    ResumeNet, WISENET
    Research funder
    EU, FP7, Seventh Framework Programme, FP7-224619
    Note

    Special Issue

    Available from: 2014-04-17 Created: 2014-04-14 Last updated: 2017-12-05. Bibliographically approved
  • 15.
    Blom, Johan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Model-Based Protocol Testing in an Erlang Environment2016Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Testing is the dominant technique for quality assurance of software systems. It typically consumes considerable resources in development projects, and is often performed in an ad hoc manner. This thesis is concerned with model-based testing, an approach that makes testing more systematic and more automated. The general idea in model-based testing is to start from a formal model, which captures the intended behavior of the software system to be tested. On the basis of this model, test cases can be generated in a systematic way. Since the model is formal, the generation of test suites can be automated, and with adequate tool support one can automatically quantify to what degree they exercise the tested software.

    Despite the significant improvements on model-based testing in the last 20 years, acceptance by industry has so far been limited. A number of commercially available tools exist, but still most testing in industry relies on manually constructed test cases.

    This thesis addresses this problem by presenting a methodology and associated tool support, intended for model-based testing of communication protocol implementations in industry. A major goal was to make the developed tool suitable for industrial usage, which meant that we had to consider several problems that are typically not addressed in the literature on model-based testing. The thesis presents several technical contributions to the area of model-based testing, including

    - a new specification language based on the functional programming language Erlang,

    - a novel technique for specifying coverage criteria for test suite generation, and

    - a technique for automatically generating test suites.

    Based on these developments, we have implemented a complete tool chain that generates and executes complete test suites, given a model in our specification language. The thesis also presents a substantial industrial case study, in which our technical contributions and the implemented tool chain are evaluated. Findings from the case study include that test suites generated using (model) coverage criteria have at least as good fault-detection capability as equally large random test suites, and that model-based testing could discover faults in well-tested software whose earlier testing had employed a relaxed validation of requirements.
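    The thesis's tool chain is Erlang-based; purely as an illustration of coverage-criterion-driven generation, the Python sketch below explores an invented two-state protocol model until every transition is covered and emits the resulting input sequences as a test suite.

        # Generate a test suite from a toy model until transition coverage is met.
        from collections import deque

        # Toy protocol model: state -> {input: next_state}
        MODEL = {
            "idle":      {"connect": "connected"},
            "connected": {"send": "connected", "disconnect": "idle"},
        }

        def transitions(model):
            return {(s, i) for s, outs in model.items() for i in outs}

        def generate_suite(model, initial="idle"):
            """Breadth-first exploration emitting input sequences (from the initial
            state) until every transition has been exercised at least once."""
            uncovered = transitions(model)
            suite, queue = [], deque([(initial, [])])
            while uncovered and queue:
                state, seq = queue.popleft()
                for inp, nxt in model[state].items():
                    if (state, inp) in uncovered:
                        uncovered.discard((state, inp))
                        suite.append(seq + [inp])
                    if len(seq) < 3:              # bound the exploration depth
                        queue.append((nxt, seq + [inp]))
            return suite

        print(generate_suite(MODEL))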

  • 16.
    Bohlin, Therese
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Regular Inference for Communication Protocol Entities2009Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    A way to create well-functioning computer systems is to automate error detection in the systems. Automated techniques for finding errors, such as testing and formal verification, require a model of the system. The technique for constructing deterministic finite automata (DFA) models, without access to the source code, is called regular inference. The technique provides sequences of input, so-called membership queries, to a system, observes the responses, and infers a model from the input and responses.

    This thesis presents work to adapt regular inference to a certain kind of system: communication protocol entities. Such entities interact by sending and receiving messages consisting of a message type and a number of parameters, each of which can potentially take on a large number of values. This may cause a model of a communication protocol entity inferred by regular inference to be very large and take a long time to infer. Since regular inference creates a model from the observed behavior of a communication protocol entity, the model may also be very different from a designer's model of the system's source code.

    This thesis presents adaptations of regular inference to infer more compact models and use fewer membership queries. The first contribution is a survey of three algorithms for regular inference. We present their similarities and their differences in terms of the required number of membership queries. The second contribution is an investigation of how many membership queries a common regular inference algorithm, the L* algorithm by Angluin, requires for randomly generated DFAs and for randomly generated DFAs with a structure common to communication protocol entities. In comparison, the DFAs with a structure common to communication protocol entities require more membership queries. The third contribution is an adaptation of regular inference to communication protocol entities whose behavior is foremost affected by the message types. The adapted algorithm avoids asking membership queries containing messages with parameter values that result in already observed responses. The fourth contribution is an approach for regular inference of communication protocol entities that communicate with messages containing parameter values from very large ranges. The approach infers compact models, and uses parameter values taken from a small portion of their ranges in membership queries. The fifth contribution is an approach to infer compact models of communication protocol entities that have a partitioning of the entity's behavior into control states similar to that in a designer's model of the protocol.
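    A minimal sketch of the idea behind the third contribution (the system under test and its messages are invented for illustration): if responses depend only on message types, the learner can answer membership queries for already-seen type sequences from a cache instead of querying the entity again.

        # Oracle that skips membership queries whose type sequence was already asked.
        class FilteringOracle:
            def __init__(self, sut):
                self.sut = sut            # callable: list of (type, param) -> response
                self.seen = {}            # type sequence -> cached response
                self.queries_sent = 0

            def membership_query(self, word):
                key = tuple(msg_type for msg_type, _param in word)
                if key not in self.seen:  # only ask the SUT for new type sequences
                    self.queries_sent += 1
                    self.seen[key] = self.sut(word)
                return self.seen[key]

        def toy_sut(word):
            """Accepts words that start with a LOGIN message, ignoring parameters."""
            return bool(word) and word[0][0] == "LOGIN"

        oracle = FilteringOracle(toy_sut)
        for user in ("alice", "bob", "carol"):
            oracle.membership_query([("LOGIN", user), ("DATA", 42)])
        print(oracle.queries_sent)   # 1: the later queries were answered from the cache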

    List of papers
    1. Model Checking
    2005 (English) In: Model-Based Testing of Reactive Systems: Advanced Lectures, Berlin / Heidelberg: Springer, 2005, pp. 557-603. Chapter in book, part of anthology (Other academic)
    Place, publisher, year, edition, pages
    Berlin / Heidelberg: Springer, 2005
    Series
    Lecture Notes in Computer Science: Programming and Software Engineering, ISSN 0302-9743 ; 3472
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-98082 (URN)978-3-540-26278-7 (ISBN)
    Available from: 2009-02-06 Created: 2009-02-06 Last updated: 2018-01-13 Bibliographically approved
    2. Insights to Angluin's Learning
    2005 (English) In: Electronic Notes in Theoretical Computer Science, ISSN 1571-0661, E-ISSN 1571-0661, Vol. 118, pp. 3-18. Article in journal (Refereed) Published
    Abstract [en]

    Among other domains, learning finite-state machines is important for obtaining a model of a system under development, so that powerful formal methods such as model checking can be applied.

    A prominent algorithm for learning such devices was developed by Angluin. We have implemented this algorithm in a straightforward way to gain further insight into its practical applicability. Furthermore, we have analyzed its performance on randomly generated as well as real-world examples. Our experiments focus on the impact of the alphabet size and the number of states on the number of membership queries needed. Additionally, we have implemented and analyzed an optimized version for learning prefix-closed regular languages. Memory consumption was one major obstacle when we attempted to learn large examples.

    We see that prefix-closed languages are relatively hard to learn compared to arbitrary regular languages. The optimization, however, shows positive results.

    Keywords
    deterministic finite-state automata, learning algorithm, regular languages, prefix-closed regular languages
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-98083 (URN)10.1016/j.entcs.2004.12.015 (DOI)
    Available from: 2009-02-06 Created: 2009-02-06 Last updated: 2018-01-13 Bibliographically approved
    3. Regular Inference for State Machines with Parameters
    2006 (English) In: Fundamental Approaches to Software Engineering: 9th International Conference, FASE 2006, held as part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2006, Vienna, Austria, March 27-28, 2006, Berlin: Springer, 2006, pp. 107-121. Chapter in book, part of anthology (Other academic)
    Abstract [en]

    Techniques for inferring a regular language, in the form of a finite automaton, from a sufficiently large sample of accepted and nonaccepted input words, have been employed to construct models of software and hardware systems, for use, e.g., in test case generation. We intend to adapt these techniques to construct state machine models of entities of communication protocols. The alphabet of such state machines can be very large, since a symbol typically consists of a protocol data unit type with a number of parameters, each of which can assume many values. In typical algorithms for regular inference, the number of needed input words grows with the size of the alphabet and the size of the minimal DFA accepting the language. We therefore modify such an algorithm (Angluin's algorithm) so that its complexity grows not with the size of the alphabet, but only with the size of a certain symbolic representation of the DFA. The main new idea is to infer, for each state, a partitioning of input symbols into equivalence classes, under the hypothesis that all input symbols in an equivalence class have the same effect on the state machine. Whenever such a hypothesis is disproved, equivalence classes are refined. We show that our modification retains the good properties of Angluin's original algorithm, but that its complexity grows with the size of our symbolic DFA representation rather than with the size of the alphabet. We have implemented the algorithm; experiments on synthesized examples are consistent with these complexity results.
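    The following toy sketch (an invented login example, not the paper's algorithm) illustrates the refinement step described above: each state keeps a partition of input symbols into classes hypothesised to behave identically, and a class is split when an observation disproves that hypothesis for one of its symbols.

        # Per-state partition of input symbols, refined when a hypothesis is disproved.
        class StatePartition:
            def __init__(self, symbols):
                self.classes = [set(symbols)]   # start with one class: all symbols equal

            def class_of(self, symbol):
                return next(c for c in self.classes if symbol in c)

            def refine(self, distinguished_symbol):
                """A counterexample showed this symbol behaves differently from the
                rest of its class: split it out into its own class."""
                c = self.class_of(distinguished_symbol)
                if len(c) > 1:
                    c.discard(distinguished_symbol)
                    self.classes.append({distinguished_symbol})

        # Hypothetical state of a login protocol: initially all PINs assumed equivalent.
        p = StatePartition(symbols={"pin=0000", "pin=1234", "pin=9999"})
        print(len(p.classes))                  # 1 class: "all PINs rejected alike"
        p.refine("pin=1234")                   # observed: the correct PIN is accepted
        print([sorted(c) for c in p.classes])  # the class is split; queries go per class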

    Place, publisher, year, edition, pages
    Berlin: Springer, 2006
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743
    Keywords
    Test generation, Algorithm complexity, Modeling, Equivalence classes, Deterministic automaton, Data type, Transmission protocol, Finite automaton, Regular language, Inference, Software development
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-98084 (URN)3-540-33093-3 (ISBN)
    Available from: 2009-02-06 Created: 2009-02-06 Last updated: 2018-01-13 Bibliographically approved
    4. Regular Inference for State Machines Using Domains with Equality Tests
    2008 (English) In: Fundamental Approaches to Software Engineering / [ed] Fiadeiro JL; Inverardi P, Berlin: Springer-Verlag, 2008, pp. 317-331. Conference paper, Published paper (Refereed)
    Abstract [en]

    Existing algorithms for regular inference (aka automata learning) allow one to infer a finite state machine by observing the output that the machine produces in response to a selected sequence of input strings. We generalize regular inference techniques to infer a class of state machines with an infinite state space. We consider Mealy machines extended with state variables that can assume values from a potentially unbounded domain. These values can be passed as parameters in input and output symbols, and can be used in tests for equality between state variables and/or message parameters. This is, to our knowledge, the first extension of regular inference to infinite-state systems. We intend to use these techniques to generate models of communication protocols from observations of their input-output behavior. Such protocols often have parameters that represent node addresses, connection identifiers, etc., which have a large domain and on which testing for equality is the only meaningful operation. Our extension consists of two phases. In the first phase we apply an existing inference technique for finite-state Mealy machines to generate a model for the case that the values are taken from a small data domain. In the second phase we transform this finite-state Mealy machine into an infinite-state Mealy machine by folding it into a compact symbolic form.
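    For illustration only (the protocol and symbols are invented), the sketch below shows the kind of infinite-state Mealy machine targeted here: one state variable over an unbounded domain, set from an input parameter and later compared for equality against incoming parameters.

        # Toy Mealy machine with a state variable and equality tests on parameters.
        class SymbolicConnection:
            def __init__(self):
                self.state = "CLOSED"
                self.conn_id = None        # state variable over an unbounded domain

            def step(self, msg_type, param):
                if self.state == "CLOSED" and msg_type == "OPEN":
                    self.conn_id = param   # store the parameter, whatever its value
                    self.state = "OPEN"
                    return "OK"
                if self.state == "OPEN" and msg_type == "DATA":
                    # equality test between a message parameter and the state variable
                    return "ACK" if param == self.conn_id else "ERR"
                if self.state == "OPEN" and msg_type == "CLOSE" and param == self.conn_id:
                    self.state, self.conn_id = "CLOSED", None
                    return "OK"
                return "ERR"

        m = SymbolicConnection()
        print(m.step("OPEN", 4711), m.step("DATA", 4711), m.step("DATA", 17))  # OK ACK ERR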

    Place, publisher, year, edition, pages
    Berlin: Springer-Verlag, 2008
    Series
    Lecture Notes in Computer Science ; 4961
    HSV category
    Identifiers
    urn:nbn:se:uu:diva-98085 (URN)10.1007/978-3-540-78743-3_24 (DOI)000254603000024 ()978-3-540-78742-6 (ISBN)
    Conference
    11th International Conference on Fundamental Approaches to Software Engineering, Budapest, HUNGARY, MAR 29 - APR 06, 2008
    Available from: 2009-02-06 Created: 2009-02-06 Last updated: 2018-01-13 Bibliographically approved
    5. Regular Inference for Communication Protocol Entities
    2008 (English) Report (Other academic)
    Series
    Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2008-024
    Identifiers
    urn:nbn:se:uu:diva-98086 (URN)
    Available from: 2009-02-06 Created: 2009-02-06 Last updated: 2009-06-09 Bibliographically approved
  • 17.
    Borgh, Joakim
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik.
    Attribute-Based Encryption in Systems with Resource Constrained Devices in an Information Centric Networking Context2016Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    An extensive analysis of attribute-based encryption (ABE) in systems with resource constrained devices is performed. Two system solutions for how ABE can be performed in such systems are proposed: one where the ABE operations are performed at the resource constrained devices, and one where ABE is performed at a powerful server. The system solutions are discussed with three different ABE schemes. Two of the schemes are the traditional key-policy ABE (KP-ABE) and ciphertext-policy ABE (CP-ABE). The third scheme uses KP-ABE to simulate CP-ABE, in an attempt to benefit from KP-ABE being computationally cheaper than CP-ABE while maintaining the intuitive way of using CP-ABE.
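    To make the KP-ABE/CP-ABE distinction concrete without going into the pairing-based cryptography, the toy Python sketch below (policy format and attributes are invented) only shows where the access policy lives: attached to the ciphertext in CP-ABE, attached to the key in KP-ABE.

        # Policy/attribute matching only; a real ABE scheme enforces this cryptographically.
        def satisfies(policy, attributes):
            op, operands = policy[0], policy[1:]
            if op == "attr":
                return operands[0] in attributes
            results = [satisfies(sub, attributes) for sub in operands]
            return all(results) if op == "and" else any(results)

        # CP-ABE flavour: the ciphertext carries a policy, the key carries attributes.
        ciphertext_policy = ("and", ("attr", "nurse"),
                             ("or", ("attr", "ward3"), ("attr", "ward4")))
        key_attributes = {"nurse", "ward4"}
        print(satisfies(ciphertext_policy, key_attributes))    # True: key can decrypt

        # KP-ABE flavour: the key carries a policy, the ciphertext carries attributes.
        key_policy = ("and", ("attr", "temperature"), ("attr", "building-A"))
        ciphertext_attributes = {"temperature", "building-B"}
        print(satisfies(key_policy, ciphertext_attributes))    # False: policy not met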

    ABE is a computationally expensive encryption method which, depending on the hardware, might not be feasible to perform on resource constrained sensors.

    An implementation of a CP-ABE scheme with a 128-bit security level was written and used to evaluate the feasibility of ABE on a sensor equipped with an ARM Cortex-M3 processor having 32 kB RAM and 256 kB flash. It is possible to perform CP-ABE on the sensor used in this project. The limiting factor for the feasibility of ABE on the sensor is the RAM size; in this case, policies with up to 12 attributes can be handled on the sensor.

    The results give an idea of the feasibility of encryption with ABE on sensors. In addition to the results, several ways of improving the performance of ABE on the sensor are discussed.

  • 18.
    Bälter Eronell, Sofia
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik.
    Lindvall, Lisa
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik.
    Development of a business model within the Internet of Things: A study of how an IT consulting firm can act as an integrator in the IoT ecosystem2016Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This study examines how an information technology consulting firm can act as an integrator for the Internet of Things. The aim is to contribute to a greater understanding of what the IoT ecosystem looks like and what roles an integrator can take in collaboration with partners. In order to create a deeper understanding of the topic, a qualitative study was conducted with Softhouse's partners, customers, and Softhouse itself, in order to place them within the IoT ecosystem. The study focused on examining how IoT solutions can be implemented in the forestry industry. The results show that Softhouse has great potential to offer IoT solutions through solid collaboration with partners. They should focus on becoming experts in data analysis through training and recruitment. The selection of partners for different projects depends on the project's size, complexity and type. Through analysis and by using the Business Model Canvas, it is possible to see which partners are most suitable for which type of project. This was applied to two such cases with clients in the forest industry: Södra Skog and APEA Mobile Security Solutions.

  • 19.
    Cambazoglu, Volkan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datorteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorarkitektur och datorkommunikation.
    Protocol, mobility and adversary models for the verification of security2016Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The increasing heterogeneity of communicating devices, ranging from resource constrained battery-driven sensor nodes to multi-core processor computers, challenges protocol design. We examine security and privacy protocols with respect to exterior factors, such as users, adversaries, and computing and communication resources, as well as interior factors, such as the operations, the interactions and the parameters of a protocol.

    Users and adversaries interact with security and privacy protocols, and can even affect the outcome of the protocols. We propose user mobility and adversary models to examine how the location privacy of users is affected when they move relative to each other in specific patterns, while adversaries with varying strengths try to identify the users based on their historical locations. The location privacy of the users is simulated with the support of the K-Anonymity protection mechanism, the Distortion-based metric, and our models of users' mobility patterns and adversaries' knowledge about users.
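    As a simplified illustration of the K-Anonymity mechanism mentioned above (the grid size, positions, and cloaking rule are invented, not the thesis's simulator), the sketch below coarsens a user's reported location until at least K users share the reported region.

        # Grow a square cloaking region around the user until >= k users fall inside.
        def cloak(user_id, positions, k, cell=1.0):
            x, y = positions[user_id]
            radius = cell
            while True:
                region = (x - radius, y - radius, x + radius, y + radius)
                inside = [u for u, (ux, uy) in positions.items()
                          if region[0] <= ux <= region[2] and region[1] <= uy <= region[3]]
                if len(inside) >= k:
                    return region, inside       # report the region, not the exact point
                radius += cell

        positions = {"u1": (0.0, 0.0), "u2": (1.5, 0.5), "u3": (4.0, 4.0), "u4": (0.5, 2.5)}
        region, anonymity_set = cloak("u1", positions, k=3)
        print(region, anonymity_set)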

    Security and privacy protocols need to operate on various computing and communication resources. Some of these protocols can be adjusted to different situations by changing parameters. A common example is to use longer secret keys in encryption for stronger security. We experiment with the trade-off between the security and the performance of the Fiat–Shamir identification protocol. We pipeline the protocol to increase its utilisation, since the communication delay outweighs the computation.
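    For reference, one round of the basic (single-secret) Fiat–Shamir identification protocol looks as follows in plain Python; the modulus here is a toy value, a real deployment uses a large modulus and many rounds, and the pipelining studied above amounts to sending the next round's commitment while the previous response is still in flight.

        # One commit/challenge/response round of Fiat–Shamir identification.
        import secrets

        p, q = 61, 53                 # toy primes; a real system uses large primes
        n = p * q
        s = 123                       # prover's secret, coprime with n
        v = pow(s, 2, n)              # public key: v = s^2 mod n

        def prover_commit():
            r = secrets.randbelow(n - 1) + 1
            return r, pow(r, 2, n)                       # commitment x = r^2 mod n

        def prover_respond(r, e):
            return (r * pow(s, e, n)) % n                # y = r * s^e mod n, e in {0, 1}

        def verifier_check(x, e, y):
            return pow(y, 2, n) == (x * pow(v, e, n)) % n  # y^2 ?= x * v^e mod n

        r, x = prover_commit()        # prover -> verifier: commitment
        e = secrets.randbelow(2)      # verifier -> prover: challenge bit
        y = prover_respond(r, e)      # prover -> verifier: response
        print(verifier_check(x, e, y))  # True for an honest prover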

    A mathematical specification based on a formal method leads to a strong proof of security. We use three formal languages, with their tool support, to model and verify the Secure Hierarchical In-Network Aggregation (SHIA) protocol for Wireless Sensor Networks (WSNs). The three formal languages specialise in cryptographic operations, distributed systems and mobile processes, respectively. Finding an appropriate level of abstraction to represent the essential features of the protocol in all three formal languages was central to this work.

    List of papers