Uppsala University Publications
1 - 50 of 80
  • 1.
    Adwent, Ann-Kristin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Shared IT Service Center i kommuner [Shared IT Service Centers in Municipalities] (2012). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    To be able to maintain an adequate IT service for various users and needs, more municipalities are looking at the possibility of merging local IT departments. One solution for merging multiple services/functions and creating synergy opportunities is called a Shared Service Center (SSC). The concept of an SSC is that the administrative service itself becomes a core activity within the organization. The aim of this thesis is to provide municipalities that are considering a merger of their local IT departments with recommendations on how to design the Shared IT Service Center. The recommendations are outlined based on an analysis of IT-management literature and reports, and on a study of three ongoing collaborations.

    The conclusions drawn from the study suggest the following guidelines for the design of a Shared IT Service Center for municipalities:

    • Create a vision associated with a specific and structured target state.
    • Identify the needs of the different target groups in the municipalities and set a common standard.
    • Create a clear and practical model/SLA for the form of the cooperation and the agreement.
    • Ensure that the individual municipalities retain their commissioning skills, so that these are not lost in the context of a common IT operation.
    • Find an organizational form that is democratic for the member municipalities and facilitates a long-term partnership.
    • Specify the operation and maintenance so that it can be regulated and controlled.
    • Establish a common help desk.
    • Establish a common standard and a consolidated infrastructure before introducing a common technology platform.

  • 2.
    Andrejev, Andrej
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Semantic Web Queries over Scientific Data (2016). Doctoral thesis, monograph (Other academic).
    Abstract [en]

    Semantic Web and Linked Open Data provide a potential platform for interoperability of scientific data, offering a flexible model for providing machine-readable and queryable metadata. However, RDF and SPARQL have gained limited adoption within the scientific community, mainly due to the lack of support for managing massive numeric data, along with certain other important features – such as extensibility with user-defined functions, query modularity, and integration with existing environments and workflows.

    We present the design, implementation and evaluation of Scientific SPARQL – a language for querying data and metadata combined, represented using the RDF graph model extended with numeric multidimensional arrays as node values – RDF with Arrays. The techniques used to store RDF with Arrays in a scalable way and process Scientific SPARQL queries and updates are implemented in our prototype software – Scientific SPARQL Database Manager, SSDM, and its integrations with data storage systems and computational frameworks. This includes scalable storage solutions for numeric multidimensional arrays and an efficient implementation of array operations. The arrays can be physically stored in a variety of external storage systems, including files, relational databases, and specialized array data stores, using our Array Storage Extensibility Interface. Whenever possible SSDM accumulates array operations and accesses array contents in a lazy fashion.

    In scientific applications numeric computations are often used for filtering or post-processing the retrieved data, which can be expressed in a functional way. Scientific SPARQL allows expressing common query sub-tasks with functions defined as parameterized queries. This becomes especially useful along with functional language abstractions such as lexical closures and second-order functions, e.g. array mappers.

    Existing computational libraries can be interfaced and invoked from Scientific SPARQL queries as foreign functions. Cost estimates and alternative evaluation directions may be specified, aiding the construction of better execution plans. Costly array processing, e.g. filtering and aggregation, is thus performed on the server, reducing the amount of communication. Furthermore, common supported operations are delegated to the array storage back-ends, according to their capabilities. Both expressivity and performance of Scientific SPARQL are evaluated on a real-world example, and further performance tests are run using our mini-benchmark for array queries.
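The lazy array access described in the abstract above can be sketched in a few lines. This is a toy illustration of the idea only, not SSDM's actual API: the triple store, the `LazyArray` class, and the predicate names are all hypothetical.

```python
# Toy sketch of "RDF with Arrays": a triple store whose object position may
# hold a lazily loaded numeric array, so metadata filtering happens first
# and array contents are only materialized when a query touches them.

class LazyArray:
    """Array handle that defers loading until an element is accessed."""
    def __init__(self, loader):
        self._loader = loader
        self._data = None
    def __getitem__(self, i):
        if self._data is None:          # materialize on first access
            self._data = self._loader()
        return self._data[i]

# Tiny triple store: (subject, predicate, object)
triples = [
    (":exp1", ":instrument", ":detectorA"),
    (":exp1", ":readings", LazyArray(lambda: [0.1, 0.5, 0.9])),
    (":exp2", ":instrument", ":detectorB"),
    (":exp2", ":readings", LazyArray(lambda: [2.0, 2.5])),
]

def query(instrument):
    """Metadata filter first; arrays of non-matching subjects stay unloaded."""
    subjects = {s for (s, p, o) in triples
                if p == ":instrument" and o == instrument}
    return [o for (s, p, o) in triples
            if s in subjects and p == ":readings"]

arrays = query(":detectorA")
print(arrays[0][2])   # only :exp1's array is materialized -> 0.9
```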

  • 3.
    Aronis, Stavros
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Effective Techniques for Stateless Model Checking (2018). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Stateless model checking is a technique for testing and verifying concurrent programs, based on exploring the different ways in which operations executed by the processes of a concurrent program can be scheduled. The goal of the technique is to expose all behaviours that can be a result of scheduling non-determinism. As the number of possible schedulings is huge, however, techniques that reduce the number of schedulings that must be explored to achieve verification have been developed. Dynamic partial order reduction (DPOR) is a prominent such technique.

    This dissertation presents a number of improvements to dynamic partial order reduction that significantly increase the effectiveness of stateless model checking. Central among these improvements are the Source and Optimal DPOR algorithms (and the theoretical framework behind them) and a technique that allows the observability of the interference of operations to be used in dynamic partial order reduction. Each of these techniques can exponentially decrease the number of schedulings that need to be explored to verify a concurrent program. The dissertation also presents a simple bounding technique that is compatible with DPOR algorithms and effective for finding bugs in concurrent programs, if the number of schedulings is too big to make full verification possible in a reasonable amount of time, even when the improved algorithms are used.

    All improvements have been implemented in Concuerror, a tool for applying stateless model checking to Erlang programs. In order to increase the effectiveness of the tool, the interference of the high-level operations of the Erlang/OTP implementation is examined, classified and precisely characterized. Aspects of the implementation of the tool are also described. Finally, a use case is presented, showing how Concuerror was used to find bugs and verify key correctness properties in repair techniques for the CORFU chain replication protocol.
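The core observation behind the reductions described above, that many schedulings are equivalent and need not all be explored, can be shown with a brute-force toy. This is not Concuerror or any DPOR algorithm, just a sketch: three interleavings of two tiny processes collapse to two observable outcomes, so an exhaustive explorer does unnecessary work.

```python
# Enumerate all interleavings of two processes (preserving per-process
# order), run each one, and count distinct final states. A DPOR-style
# explorer would aim to visit only one scheduling per distinct outcome.

def interleavings(a, b):
    if not a or not b:
        yield tuple(a or b)
        return
    for rest in interleavings(a[1:], b):
        yield (a[0],) + rest
    for rest in interleavings(a, b[1:]):
        yield (b[0],) + rest

def run(schedule):
    """Apply writes in schedule order; return the final shared state."""
    state = {}
    for _proc, var, val in schedule:
        state[var] = val
    return tuple(sorted(state.items()))

p1 = [("p1", "x", 1), ("p1", "y", 1)]   # process 1: write x, then y
p2 = [("p2", "x", 2)]                   # process 2: write x

scheds = list(interleavings(p1, p2))
outcomes = {run(s) for s in scheds}
print(len(scheds), len(outcomes))       # -> 3 2
```

Only the two writes to `x` interfere; the write to `y` is independent of process 2, which is why two of the three schedulings are indistinguishable.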

    List of papers
    1. Source Sets: A Foundation for Optimal Dynamic Partial Order Reduction
    2017 (English). In: Journal of the ACM, ISSN 0004-5411, E-ISSN 1557-735X, Vol. 64, no 4, article id 25. Article in journal (Refereed). Published.
    Abstract [en]

    Stateless model checking is a powerful method for program verification that, however, suffers from an exponential growth in the number of explored executions. A successful technique for reducing this number, while still maintaining complete coverage, is Dynamic Partial Order Reduction (DPOR), an algorithm originally introduced by Flanagan and Godefroid in 2005 and since then not only used as a point of reference but also extended by various researchers. In this article, we present a new DPOR algorithm, which is the first to be provably optimal in that it always explores the minimal number of executions. It is based on a novel class of sets, called source sets, that replace the role of persistent sets in previous algorithms. We begin by showing how to modify the original DPOR algorithm to work with source sets, resulting in an efficient and simple-to-implement algorithm, called source-DPOR. Subsequently, we enhance this algorithm with a novel mechanism, called wakeup trees, that allows the resulting algorithm, called optimal-DPOR, to achieve optimality. Both algorithms are then extended to computational models where processes may disable each other, for example, via locks. Finally, we discuss tradeoffs of the source- and optimal-DPOR algorithms and present programs that illustrate significant time and space performance differences between them. We have implemented both algorithms in a publicly available stateless model checking tool for Erlang programs, while the source-DPOR algorithm is at the core of a publicly available stateless model checking tool for C/pthread programs running on machines with relaxed memory models. Experiments show that source sets significantly increase the performance of stateless model checking compared to using the original DPOR algorithm and that wakeup trees incur only a small overhead in both time and space in practice.

    Place, publisher, year, edition, pages
    Association for Computing Machinery (ACM), 2017
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:uu:diva-331842 (URN); 10.1145/3073408 (DOI); 000410615900002
    Projects
    UPMARC, RELEASE
    Funder
    EU, FP7, Seventh Framework Programme, 287510; Swedish Research Council
    Available from: 2017-10-18. Created: 2017-10-18. Last updated: 2018-01-13. Bibliographically approved.
    2. The shared-memory interferences of Erlang/OTP built-ins
    2017 (English). In: Proceedings of the 16th ACM SIGPLAN International Workshop on Erlang (Erlang '17) / [ed] Chechina, N.; Fritchie, S.L., New York: Association for Computing Machinery (ACM), 2017, p. 43-54. Conference paper, Published paper (Refereed).
    Abstract [en]

    Erlang is a concurrent functional language based on the actor model of concurrency. In the purest form of this model, actors are realized by processes that do not share memory and communicate with each other exclusively via message passing. Erlang comes quite close to this model, as message passing is the primary form of interprocess communication and each process has its own memory area that is managed by the process itself. For this reason, Erlang is often referred to as implementing “shared nothing” concurrency. Although this is a convenient abstraction, in reality Erlang's main implementation, the Erlang/OTP system, comes with a large number of built-in operations that access memory which is shared by processes. In this paper, we categorize these built-ins, and characterize the interferences between them that can result in observable differences of program behaviour when these built-ins are used in a concurrent setting. The paper is complemented by a publicly available suite of more than one hundred small Erlang programs that demonstrate the racing behaviour of these built-ins.

    Place, publisher, year, edition, pages
    New York: Association for Computing Machinery (ACM), 2017
    Keywords
    Actors, BEAM, Concuerror, Erlang, Scheduling nondeterminism
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:uu:diva-331840 (URN); 10.1145/3123569.3123573 (DOI); 000426922100005; 978-1-4503-5179-9 (ISBN)
    Conference
    16th ACM SIGPLAN International Workshop on Erlang (Erlang), Sep 08, 2017, Oxford, England.
    Projects
    UPMARC
    Funder
    Swedish Research Council
    Available from: 2017-10-18. Created: 2017-10-18. Last updated: 2018-07-03. Bibliographically approved.
    3. Testing And Verifying Chain Repair Methods For CORFU Using Stateless Model Checking
    2017 (English). Conference paper, Published paper (Refereed).
    Abstract [en]

    Corfu is a distributed shared log that is designed to be scalable and reliable in the presence of failures and asynchrony. Internally, Corfu is fully replicated for fault tolerance, without sharding data or sacrificing strong consistency. In this case study, we present the modeling approaches we followed to test and verify, using Concuerror, the correctness of repair methods for the Chain Replication protocol suitable for Corfu. In the first two methods we tried, Concuerror located bugs quite fast. In contrast, the tool did not manage to find bugs in the third method, but the time this took also motivated an improvement in the tool that reduces the number of traces explored. Besides more details about all the above, we present experiences and lessons learned from applying stateless model checking for verifying complex protocols suitable for distributed programming.

    Place, publisher, year, edition, pages
    Cham: Springer, 2017
    Series
    Lecture Notes in Computer Science ; 10510
    National Category
    Computer Systems; Software Engineering
    Identifiers
    urn:nbn:se:uu:diva-331836 (URN); 10.1007/978-3-319-66845-1_15 (DOI); 978-3-319-66844-4 (ISBN); 978-3-319-66845-1 (ISBN)
    Conference
    Integrated Formal Methods. IFM 2017
    Projects
    UPMARC
    Available from: 2017-10-18. Created: 2017-10-18. Last updated: 2018-01-13. Bibliographically approved.
    4. Optimal dynamic partial order reduction with observers
    2018 (English). In: Tools and Algorithms for the Construction and Analysis of Systems: Part II, Springer, 2018, Vol. 10806, p. 229-248. Conference paper, Published paper (Refereed).
    Abstract [en]

    Dynamic partial order reduction (DPOR) algorithms are used in stateless model checking (SMC) to combat the combinatorial explosion in the number of schedulings that need to be explored to guarantee soundness. The most effective of them, the Optimal DPOR algorithm, is optimal in the sense that it explores only one scheduling per Mazurkiewicz trace. In this paper, we enhance DPOR with the notion of observability, which makes dependencies between operations conditional on the existence of future operations, called observers. Observers naturally lead to a lazy construction of dependencies. This requires significant changes in the core of POR algorithms (and Optimal DPOR in particular), but also makes the resulting algorithm, Optimal DPOR with Observers, super-optimal in the sense that it explores exponentially fewer schedulings than Mazurkiewicz traces in some cases. We argue that observers come naturally in many concurrency models, and demonstrate the performance benefits that Optimal DPOR with Observers achieves in both an SMC tool for shared memory concurrency and a tool for concurrency via message passing, using both synthetic and actual programs as benchmarks.

    Place, publisher, year, edition, pages
    Springer, 2018
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 10806
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-333508 (URN); 10.1007/978-3-319-89963-3_14 (DOI); 000445822600014; 978-3-319-89962-6 (ISBN); 978-3-319-89963-3 (ISBN)
    Conference
    TACAS 2018, April 14–20, Thessaloniki, Greece.
    Projects
    UPMARC
    Funder
    Swedish Research Council
    Available from: 2018-04-14. Created: 2017-11-21. Last updated: 2019-01-07. Bibliographically approved.
  • 4.
    Berghult, Astrid
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    A practical comparison between algebraic and statistical attacks on the lightweight cipher SIMON (2016). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In the summer of 2013 the NSA released a new family of lightweight block ciphers called SIMON. However, they did not publish any assessment of the security of SIMON. Since then, only a few papers on this topic have been released and none of them has included an algebraic analysis. Moreover, only one paper described a practical implementation of the attack. This master thesis aims to implement a practical attack, both algebraic and differential, on SIMON. In doing so, we are able to make a comparison between the two different attack methods. The algebraic attack was executed with the SAT-solver CryptoMiniSat2 and could break 7 rounds. The differential attack was implemented in three steps. First we created a difference distribution table (DDT), and then we identified a differential by a search algorithm for the DDT. In the last step we designed a key recovery attack to recover the last round key. The attack could break 13 rounds for a 9-round differential. With a simple expansion of the key recovery attack it has the potential to break even more rounds for the same 9-round differential. This indicates that algebraic cryptanalysis might not be such a strong tool, since it could only break 7 rounds. Furthermore, if a generic algebraic attack does not work on SIMON, it has little or no chance of being successful on a more complex cipher. In other words, this algebraic attack may serve as a benchmark for the efficiency of generic algebraic attacks.
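The difference distribution table mentioned in the abstract can be sketched on a scaled-down round function. Note the hedge: SIMON's round function is f(x) = (S1(x) & S8(x)) ^ S2(x) on 16- to 64-bit words (Si is a left rotation by i bits); the 8-bit word size and rotation amounts (1, 2, 3) below are toy choices for illustration, not the real parameters.

```python
N = 8                       # toy word size in bits (real SIMON: 16..64)
MASK = (1 << N) - 1

def rotl(x, r):
    """Left rotation of an N-bit word."""
    return ((x << r) | (x >> (N - r))) & MASK

def f(x):
    """SIMON-style round function with toy rotations (1, 2, 3);
    the real cipher uses rotations 1, 8 and 2."""
    return (rotl(x, 1) & rotl(x, 2)) ^ rotl(x, 3)

def ddt():
    """DDT[a][b] = number of inputs x with f(x) ^ f(x ^ a) == b."""
    table = [[0] * (1 << N) for _ in range(1 << N)]
    for a in range(1 << N):
        for x in range(1 << N):
            table[a][f(x) ^ f(x ^ a)] += 1
    return table

t = ddt()
print(t[0][0])        # a zero input difference always maps to zero -> 256
print(sum(t[5]))      # each row distributes all 256 inputs -> 256
```

High-count entries of such a table are the differentials a search algorithm would chain into the multi-round characteristic used by the key recovery step.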

  • 5.
    Lindegren, Axel
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Partitioning temporal networks: A study of finding the optimal partition of temporal networks using community detection (2018). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Many of the algorithms used for community detection in temporal networks have been adapted from static network theory. A common approach to dealing with the temporal dimension is to create multiple static networks from one temporal network, based on a time condition. In this thesis, the focus lies on identifying the optimal partitioning of a few temporal networks. This is done by utilizing the popular community detection algorithm called Generalized Louvain. The output of Generalized Louvain comes in two parts: first, the created community structure, i.e. how the network is connected; secondly, a measure called modularity, a scalar value representing the quality of the identified community structure. The methodology used is aimed at creating a comparable result by normalizing modularity. The normalization process can be explained in two major steps: 1) study the effects on modularity when partitioning a temporal network into an increasing number of slices; 2) study the effects on modularity when varying the number of connections (edges) in each time slice. The results show that the created methodology yields comparable results on two of the four temporal networks tested here, implying that it might be more suited for some networks than others. This can serve as an indication that there does not exist a general model for community detection in temporal networks. Instead, the type of network is key to choosing the method.
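Modularity, the quality score being normalized above, has a standard closed form: Q = Σ_c (e_c/m − (d_c/2m)²), where e_c is the number of intra-community edges, d_c the total degree of community c, and m the total edge count. A minimal pure-Python version (this is the generic Newman formula, not the Generalized Louvain implementation used in the thesis):

```python
def modularity(edges, communities):
    """Newman modularity of an undirected graph given as an edge list."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for c in communities:
        members = set(c)
        e_c = sum(1 for u, v in edges if u in members and v in members)
        d_c = sum(deg[n] for n in members)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge (2-3); the natural partition
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(round(modularity(edges, [{0, 1, 2}, {3, 4, 5}]), 4))   # -> 0.3571
```

Here m = 7, each triangle has e_c = 3 and d_c = 7, so Q = 2·(3/7 − 1/4) = 5/14, the positive score expected when the partition matches the obvious structure.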

  • 6.
    Aziz, Yama
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Exploring a keyword driven testing framework: a case study at Scania IT (2017). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The purpose of this thesis is to investigate organizational quality assurance through the international testing standard ISO 29119. The focus is on how an organization carries out testing processes and designs and implements test cases. Keyword driven testing is a test composition concept in ISO 29119 and is suitable for automation. This thesis answers how keyword driven testing can facilitate the development of maintainable test cases and support test automation in an agile organization.

    The methodology used was a qualitative case study including semi-structured interviews and focus groups with agile business units within Scania IT. Among the interview participants were developers, test engineers, scrum masters and a unit manager.

    The results describe testing practices carried out in several agile business units, maintainability issues with test automation and general ideas of how test automation should be approached. Common issues with test automation were test cases failing due to changed test inputs, inexperience with test automation frameworks and lack of resources due to project release cycle.

    This thesis concludes that keyword driven testing has the potential of solving several maintainability issues with test cases breaking. However, the practicality and effectiveness of said potential remain unanswered. Moreover, successfully developing an automated keyword driven testing framework requires integration with existing test automation tools and considering the agile organizational circumstances.
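The separation that keyword driven testing provides, test cases as data, keyword implementations as code, can be sketched in a few lines. The keywords, the session object, and the registration decorator below are all hypothetical illustrations, not part of ISO 29119 or any Scania IT framework.

```python
# Minimal keyword-driven testing sketch: a registry maps keyword names to
# implementations, and a test case is just a sequence of (keyword, args)
# rows, so maintainers edit tables rather than code.

KEYWORDS = {}

def keyword(name):
    """Decorator that registers a function as a keyword implementation."""
    def register(fn):
        KEYWORDS[name] = fn
        return fn
    return register

@keyword("open session")
def open_session(ctx):
    ctx["session"] = {"logged_in": False}

@keyword("log in")
def log_in(ctx, user):
    ctx["session"]["logged_in"] = True
    ctx["session"]["user"] = user

@keyword("verify user")
def verify_user(ctx, expected):
    assert ctx["session"]["user"] == expected

def run_test(steps):
    """Execute each row against the keyword library; fail fast on errors."""
    ctx = {}
    for name, *args in steps:
        KEYWORDS[name](ctx, *args)      # unknown keyword -> KeyError
    return "PASS"

# The test case itself is plain data:
test_case = [("open session",), ("log in", "alice"), ("verify user", "alice")]
print(run_test(test_case))              # -> PASS
```

When an interface changes, only the keyword implementation is updated and every test case that uses it keeps working, which is the maintainability argument the thesis makes.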

  • 7.
    Backlund, Ludvig
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    A technical overview of distributed ledger technologies in the Nordic capital market (2016). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis examines how Distributed Ledger Technologies (DLTs) could be utilized in capital markets in general and in the Nordic capital market in particular. DLTs were introduced with the so-called cryptocurrency Bitcoin in 2009 and have in the last few years been of interest to various financial institutions as a means to streamline financial processes. By combining computer science concepts such as public-key cryptography and consensus algorithms, DLTs make it possible to keep shared databases with limited trust among the participants and without the use of a trusted third party. In this thesis, various actors on the Nordic capital market were interviewed and their stances on DLTs were summarized. In addition to this, a proof of concept of a permissioned DLT application for ownership registration of securities was constructed. It was found that all the interviewees were generally optimistic about the potential of DLTs to increase the efficiency of capital markets. The technology needs to be adapted to handle the capital market's demands for privacy and large transaction volumes, but there is general agreement among the interviewees that these issues will be solved. The biggest challenge for an adoption of DLTs seems to lie in finding a common industry-wide standard.
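The tamper evidence that makes a shared ownership register trustworthy comes from hash chaining: each block commits to the hash of its predecessor, so rewriting an old record invalidates every later link. The sketch below shows only that chain structure (no consensus, no signatures, no permissioning) and is not the thesis's proof-of-concept; the record fields are hypothetical.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over a canonical JSON encoding of the block."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, record):
    """Add a block committing to the hash of the current chain head."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "record": record})

def valid(chain):
    """Every block's 'prev' must match its predecessor's actual hash."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append(chain, {"security": "ABC", "owner": "bank-1"})
append(chain, {"security": "ABC", "owner": "bank-2"})
print(valid(chain))                      # -> True
chain[0]["record"]["owner"] = "mallory"  # tamper with history
print(valid(chain))                      # -> False
```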

  • 8.
    Badiozamany, Sobhan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Real-time data stream clustering over sliding windows (2016). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    In many applications, e.g. urban traffic monitoring, stock trading, and industrial sensor data monitoring, clustering algorithms are applied on data streams in real-time to find current patterns. Here, sliding windows are commonly used as they capture concept drift.

    Real-time clustering over sliding windows means detecting continuously evolving clusters as soon as they occur in the stream, which requires efficient maintenance of cluster memberships that change as windows slide.

    Data stream management systems (DSMSs) provide high-level query languages for searching and analyzing streaming data. In this thesis we extend a DSMS with a real-time data stream clustering framework called the Generic 2-phase Continuous Summarization framework (G2CS). G2CS modularizes data stream clustering by taking as input clustering algorithms that are expressed in terms of a number of functions and indexing structures. G2CS supports real-time clustering through an efficient window-sliding mechanism and algorithm-transparent indexing. A particular challenge for real-time detection of a high number of rapidly evolving clusters is the efficiency of window slides for clustering algorithms where deletion of expired data is not supported, e.g. BIRCH. To that end, G2CS includes a novel window maintenance mechanism called Sliding Binary Merge (SBM). To further improve real-time sliding performance, G2CS uses generation-based multi-dimensional indexing, where indexing structures suitable for the clustering algorithms can be plugged in.
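The problem the abstract describes, sliding a window over an algorithm whose summaries can be merged but cannot have points deleted, can be illustrated with BIRCH-style cluster features. This is a heavily simplified sketch of the per-slice idea only; the actual Sliding Binary Merge mechanism in G2CS is considerably more elaborate.

```python
# Keep one additive summary per time slice; on each slide, drop the expired
# slice and re-merge only the slices still inside the window. No deletion
# from a summary is ever needed.

from collections import deque

def summarize(points):
    """BIRCH-style cluster feature for 1-D points: (count, sum, sum of squares)."""
    return (len(points), sum(points), sum(p * p for p in points))

def merge(a, b):
    """Cluster features are additive, so merging is element-wise addition."""
    return tuple(x + y for x, y in zip(a, b))

def window_cf(slices):
    total = (0, 0, 0)
    for cf in slices:
        total = merge(total, cf)
    return total

window = deque(maxlen=3)                     # window covers the 3 newest slices
for slice_points in [[1, 2], [3], [4, 5], [6]]:
    window.append(summarize(slice_points))   # expired slice falls out automatically

cf = window_cf(window)                       # re-merge what remains in the window
print(round(cf[1] / cf[0], 2))               # mean of the windowed points -> 4.5
```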

    List of papers
    1. Scalable ordered indexing of streaming data
    2012 (English). In: 3rd International Workshop on Accelerating Data Management Systems using Modern Processor and Storage Architectures, 2012, p. 11-. Conference paper, Published paper (Refereed).
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-185068 (URN)
    Conference
    ADMS 2012, Istanbul, Turkey
    Projects
    eSSENCE
    Available from: 2012-08-27. Created: 2012-11-19. Last updated: 2018-01-12. Bibliographically approved.
    2. Grand challenge: Implementation by frequently emitting parallel windows and user-defined aggregate functions
    2013 (English). In: Proc. 7th ACM International Conference on Distributed Event-Based Systems, New York: ACM Press, 2013, p. 325-330. Conference paper, Published paper (Refereed).
    Place, publisher, year, edition, pages
    New York: ACM Press, 2013
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-211954 (URN); 10.1145/2488222.2488284 (DOI); 978-1-4503-1758-0 (ISBN)
    Conference
    DEBS 2013
    Available from: 2013-06-29. Created: 2013-12-03. Last updated: 2018-01-11. Bibliographically approved.
    3. Distributed multi-query optimization of continuous clustering queries
    2014 (English). In: Proc. VLDB 2014 PhD Workshop, 2014. Conference paper, Published paper (Refereed).
    Abstract [en]

    This work addresses the problem of sharing execution plans for queries that continuously cluster streaming data to provide an evolving summary of the data stream. This is challenging since clustering is an expensive task, there might be many clustering queries running simultaneously, each continuous query has a long life time span, and the execution plans often overlap. Clustering is similar to conventional grouped aggregation but cluster formation is more expensive than group formation, which makes incremental maintenance more challenging. The goal of this work is to minimize response time of continuous clustering queries with limited resources through multi-query optimization. To that end, strategies for sharing execution plans between continuous clustering queries are investigated and the architecture of a system is outlined that optimizes the processing of multiple such queries. Since there are many clustering algorithms, the system should be extensible to easily incorporate user defined clustering algorithms.

    National Category
    Computer Sciences
    Research subject
    Computer Science with specialization in Database Technology
    Identifiers
    urn:nbn:se:uu:diva-302790 (URN)
    Conference
    VLDB 2014
    Available from: 2016-09-09. Created: 2016-09-09. Last updated: 2018-01-10. Bibliographically approved.
    4. Framework for real-time clustering over sliding windows
    2016 (English). In: Proc. 28th International Conference on Scientific and Statistical Database Management, New York: ACM Press, 2016, p. 1-13, article id 19. Conference paper, Published paper (Refereed).
    Place, publisher, year, edition, pages
    New York: ACM Press, 2016
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-302792 (URN); 10.1145/2949689.2949696 (DOI); 978-1-4503-4215-5 (ISBN)
    Conference
    SSDBM 2016
    Available from: 2016-07-18. Created: 2016-09-09. Last updated: 2018-01-10. Bibliographically approved.
  • 9.
    Bergquist, Jonatan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Blockchain Technology and Smart Contracts: Privacy-preserving Tools (2017). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The purpose of this Master's thesis is to explore blockchain technology and smart contracts as a way of building privacy-sensitive applications. The main focus is on a medication plan containing prescriptions, built on a blockchain system of smart contracts. This is an example use case, but the results can be transferred to other cases where sensitive data is shared and a proof of validity or authentication is needed. First, the problem is presented: why medication plans are in need of digitalisation and why blockchain technology is a fitting technology for implementing such an application. Then blockchain technology is explained, since it is a relatively new and unfamiliar IT construct. Thereafter, a design for solving the problem is proposed: a system of smart contracts was built to show how such an application can be constructed, together with suggested guidelines for how a blockchain system should be designed to fulfil the requirements that were defined. Finally, a discussion is held regarding the applicability of different blockchain designs to the problem of privacy-handling applications.

  • 10.
    Bernström, Kristian
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Näsman, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Utredning och implementering av en prototyp för integration av Prevas FOCS och ABB 800xA2014Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    ABB and Prevas have initiated a collaboration to sell a system to optimize steel industry furnaces, called FOCS. The purpose of this thesis is to investigate possibilities for integrating Prevas FOCS and ABB 800xA.

    The result of the investigation is used for an implementation of a prototype of the integrated system. The study presents a general method that can be used when integrating two software systems. The prototype of the integrated system is made with usability principles in mind. This is a very important aspect in order to create a good working environment for the operators of a steel plant. It is also important to follow communication standards when integrating software systems. In industrial networks used in the steel industry, OPC is a standard for communication. We recommend ABB and Prevas to follow this standard when possible to make the integration smoother. To keep the cost of the integration to a minimum, it is also recommended to reuse existing resources. This can, however, have a negative effect on usability, and it is therefore important to keep a balance between cost and usability.

    The prototype made in this thesis accomplishes the goal of transferring the functionalities used by operators of Prevas FOCS to 800xA so that operators can control the processes using only one integrated system.

  • 11.
    Björdal, Gustav
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    String Variables for Constraint-Based Local Search2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    String variables occur as a natural part of many computationally challenging problems. Usually, such problems are solved using problem-specific algorithms implemented from first principles, which can be a time-consuming and error-prone task. A constraint solver is a framework that can be used to solve computationally challenging problems by first declaratively defining the problem and then solving it using specialised off-the-shelf algorithms, which can cut down development time significantly and result in faster solution times and higher solution quality. There are many constraint solving technologies, one of which is constraint-based local search (CBLS). However, very few constraint solvers have native support for solving problems with string variables. The goal of this thesis is to add string variables as a native type to the CBLS solver OscaR/CBLS. The implementation was experimentally evaluated on the Closest String Problem and the Word Equation System problem. The evaluation shows that string variables for CBLS can be a viable option for solving string problems. However, further work is required to obtain even more competitive performance.
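
    The local-search principle evaluated here on the Closest String Problem can be sketched in a few lines. The sketch below is a hypothetical, standalone Python illustration of the general idea (repeatedly mutate one position of a candidate string and keep non-worsening moves); it is not the OscaR/CBLS implementation described in the thesis, which is written in Scala with dedicated string neighbourhoods:

    ```python
    import random

    def closest_string_local_search(strings, alphabet="ab", iters=1000, seed=0):
        """Greedy local search for the Closest String Problem: find a string
        minimising the maximum Hamming distance to all input strings."""
        rng = random.Random(seed)
        n = len(strings[0])

        def cost(candidate):
            return max(sum(a != b for a, b in zip(candidate, s)) for s in strings)

        current = list(strings[0])              # start from one of the inputs
        best = cost(current)
        for _ in range(iters):
            i = rng.randrange(n)                # pick a random position...
            old = current[i]
            current[i] = rng.choice(alphabet)   # ...and try a random character
            c = cost(current)
            if c <= best:                       # keep non-worsening moves
                best = c
            else:
                current[i] = old                # undo worsening moves
        return "".join(current), best

    # e.g. closest_string_local_search(["aab", "aba", "baa"]) searches strings
    # over {a, b}; "aaa" has maximum Hamming distance 1 to all three inputs.
    ```

    A real CBLS solver would maintain the cost incrementally under each move rather than recomputing it, which is the main point of native string variables in the thesis.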

  • 12.
    Björklund, Henrik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Combinatorial Optimization for Infinite Games on Graphs2005Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Games on graphs have become an indispensable tool in modern computer science. They provide powerful and expressive models for numerous phenomena and are extensively used in computer-aided verification, automata theory, logic, complexity theory, computational biology, etc.

    The infinite games on finite graphs we study in this thesis have their primary applications in verification, but are also of fundamental importance from the complexity-theoretic point of view. They include parity, mean payoff, and simple stochastic games.

    We focus on solving graph games by using iterative strategy improvement and methods from linear programming and combinatorial optimization. To this end we consider old strategy evaluation functions, construct new ones, and show how all of them, due to their structural similarities, fit into a unifying combinatorial framework. This allows us to employ randomized optimization methods from combinatorial linear programming to solve the games in expected subexponential time.

    We introduce and study the concept of a controlled optimization problem, capturing the essential features of many graph games, and provide sufficient conditions for solvability of such problems in expected subexponential time.

    The discrete strategy evaluation function for mean payoff games, which we derive from the new controlled longest-shortest path problem, leads to improvement algorithms that are considerably more efficient than the previously known ones, and also improves the efficiency of algorithms for parity games.

    We also define the controlled linear programming problem, and show how the games are translated into this setting. Subclasses of the problem, more general than the games considered, are shown to belong to NP ∩ coNP, or even to be solvable by subexponential algorithms.

    Finally, we take the first steps in investigating the fixed-parameter complexity of parity, Rabin, Streett, and Muller games.

    List of papers
    1. A discrete subexponential algorithm for parity games
    Open this publication in new window or tab >>A discrete subexponential algorithm for parity games
    2003 In: STACS 2003, 20th Annual Symposium on Theoretical Aspects of Computer Science, 2003, p. 663-674Chapter in book (Other academic) Published
    Identifiers
    urn:nbn:se:uu:diva-92509 (URN)3-540-00623-0 (ISBN)
    Available from: 2005-01-18 Created: 2005-01-18Bibliographically approved
    2. Complexity of model checking by iterative improvement: the pseudo-Boolean framework
    Open this publication in new window or tab >>Complexity of model checking by iterative improvement: the pseudo-Boolean framework
    2003 In: Perspectives of Systems Informatics: 5th International Andrei Ershov Memorial Conference, 2003, p. 381-394Chapter in book (Other academic) Published
    Identifiers
    urn:nbn:se:uu:diva-92510 (URN)3-540-20813-5 (ISBN)
    Available from: 2005-01-18 Created: 2005-01-18Bibliographically approved
    3. Memoryless determinacy of parity and mean payoff games: a simple proof
    Open this publication in new window or tab >>Memoryless determinacy of parity and mean payoff games: a simple proof
    2004 In: Theoretical Computer Science, ISSN 0304-3975, Vol. 310, p. 365-378Article in journal (Refereed) Published
    Identifiers
    urn:nbn:se:uu:diva-92511 (URN)
    Available from: 2005-01-18 Created: 2005-01-18Bibliographically approved
    4. A combinatorial strongly subexponential algorithm for mean payoff games
    Open this publication in new window or tab >>A combinatorial strongly subexponential algorithm for mean payoff games
    2004 In: Mathematical Foundations of Computer Science 2004, 2004, p. 673-685Chapter in book (Other academic) Published
    Identifiers
    urn:nbn:se:uu:diva-92512 (URN)3-540-22823-3 (ISBN)
    Available from: 2005-01-18 Created: 2005-01-18Bibliographically approved
    5. The controlled linear programming problem: DIMACS Technical Report 2004-41, September 2004
    Open this publication in new window or tab >>The controlled linear programming problem: DIMACS Technical Report 2004-41, September 2004
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:uu:diva-92513 (URN)
    Available from: 2005-01-18 Created: 2005-01-18 Last updated: 2010-01-13Bibliographically approved
  • 13.
    Brandauer, Stephan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Structured Data2018Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    References are a programming language construct that lets a programmer access a datum independently of its location.

    References permit aliasing -- several references to the same object, effectively making a single object accessible through different names (or paths). Aliasing, especially of mutable data, is both a blessing and a curse: when used correctly, it can make a programmer's life easier; when used incorrectly, for example through accidental aliases that the programmer is unaware of, aliasing can lead to hard-to-find bugs and hard-to-verify programs.

    Aliases allow us to build efficient data structures by connecting objects together, making them immediately reachable. Aliases are at the heart of many useful programming idioms. But with great power comes great responsibility: unless a programmer carefully manages aliases in a program, aliases propagate changes and make parts of a program's memory change seemingly for no reason. Additionally, such bugs are very easy to make but very hard to track down.

    This thesis presents an overview of techniques for controlling how, when and if data can be aliased, as well as how and if data can be mutated. Additionally, it presents three different projects aimed at conserving the blessings, but reducing the curses. The first project is disjointness domains, a type system for expressing intended aliasing in a fine-grained manner so that aliasing will not be unexpected; the second project is Spencer, a tool to flexibly and precisely analyse the use of aliasing in programs to improve our understanding of how aliasing of mutable data is used in practice; and the third project is C♭, an approach for implementing high-level collection data structures using a richer reference construct that reduces aliasing problems but still retains many of aliasing's benefits.

    List of papers
    1. Disjointness Domains for Fine-Grained Aliasing
    Open this publication in new window or tab >>Disjointness Domains for Fine-Grained Aliasing
    2015 (English)Conference paper, Published paper (Refereed)
    Abstract [en]

    Aliasing is crucial for supporting useful implementation patterns, but it makes reasoning about programs difficult. To deal with this problem, numerous type-based aliasing control mechanisms have been proposed, expressing properties such as uniqueness. Uniqueness, however, is black-and-white: either a reference is unique or it can be arbitrarily aliased. It is also global, excluding aliases throughout the entire system, which makes code brittle to changing requirements. Disjointness domains, a new approach to alias control, address this problem by enabling more graduations between uniqueness and arbitrary reference sharing. They allow expressing aliasing constraints local to a certain set of variables (either stack variables or fields): for instance, that no aliasing occurs between variables within a set but only between such sets, or the opposite, that aliasing occurs within a set but not between different sets. A hierarchy of disjointness domains controls the flow of references through a program, helping the programmer reason about disjointness and enforce local alias invariants. The resulting system supports fine-grained control of aliasing between both variables and objects, making aliasing explicit to programmers, compilers, and tooling. This paper presents a formal account of disjointness domains along with examples. Disjointness domains provide novel means of expressing may-alias kinds of constraints, which may prove useful in compiler optimisation and verification.

    Series
    ACM SIGPLAN NOTICES, ISSN 0362-1340
    Keywords
    Design; Theory; Aliasing; mutable state; type systems; uniqueness; linear types
    National Category
    Electrical Engineering, Electronic Engineering, Information Engineering
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-268747 (URN)10.1145/2814270.2814280 (DOI)000367256500051 ()
    Conference
    ACM International Conference on Object Oriented Programming Systems Languages and Applications (OOPSLA)
    Available from: 2015-12-09 Created: 2015-12-09 Last updated: 2018-11-27Bibliographically approved
    2. Spencer: Interactive Heap Analysis for the Masses
    Open this publication in new window or tab >>Spencer: Interactive Heap Analysis for the Masses
    2017 (English)In: 2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR 2017), IEEE, 2017, no 14, p. 113-123Conference paper, Published paper (Refereed)
    Abstract [en]

    Programming-language design and run-time implementation require detailed knowledge about the programs that users want to implement. Acquiring this knowledge is hard, and there is little tool support to effectively estimate whether a proposed tradeoff actually makes sense in the context of real-world applications. Ideally, knowledge about the behaviour of "typical" programs is 1) easily obtainable, 2) easily reproducible, and 3) easily sharable. We present Spencer, an open source web service and API framework for dynamic analysis of a continuously growing set of traces of standard program corpora. Users do not obtain traces on their own, but can instead send queries to the web service that will be executed on a set of program traces. Queries are built in terms of a set of query combinators that present a high-level interface for working with trace data. Since the framework is high level, and there is a hosted collection of recorded traces, queries are easy to implement. Since the data sets are shared by the research community, results are reproducible. Since the actual queries run on one (or many) servers that provide analysis as a service, obtaining results is possible on commodity hardware. Data in Spencer is meant to be obtained once and analysed often, making the overhead of data collection mostly irrelevant. This allows Spencer to collect more data than traditional tracing tools can afford within their performance budget. Results in Spencer are cached, making complicated analyses that build on cached primitive queries speedy.

    Place, publisher, year, edition, pages
    IEEE, 2017
    Series
    IEEE International Working Conference on Mining Software Repositories, E-ISSN 2160-1852
    Keywords
    tracing, dynamic analysis, heap analysis
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-334818 (URN)10.1109/MSR.2017.35 (DOI)000425917100013 ()978-1-5386-1544-7 (ISBN)
    Conference
    IEEE/ACM 14th International Conference on Mining Software Repositories (MSR), Buenos Aires, Argentina, May 20-21, 2017
    Available from: 2017-11-28 Created: 2017-11-28 Last updated: 2018-11-27Bibliographically approved
    3. Mining for Safety using Interactive Trace Analysis
    Open this publication in new window or tab >>Mining for Safety using Interactive Trace Analysis
    2017 (English)In: Pre-Proceedings - Fifteenth International Workshop on Quantitative Aspects of Programming Languages and Systems, 2017, no 15, article id 14Conference paper, Published paper (Refereed)
    Keywords
    tracing, dynamic analysis, heap analysis
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-334817 (URN)
    Conference
    Fifteenth International Workshop on Quantitative Aspects of Programming Languages and Systems (QAPL)
    Available from: 2017-11-28 Created: 2017-11-28 Last updated: 2018-11-27Bibliographically approved
    4. C♭: A New Modular Approach to Implementing Efficient and Tunable Collections
    Open this publication in new window or tab >>C♭: A New Modular Approach to Implementing Efficient and Tunable Collections
    2018 (English)In: Proceedings of the 2018 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward! 2018), ACM , 2018, p. 57-71Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    ACM, 2018
    Keywords
    data structure design, domain specific language, performance tuning
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-366812 (URN)10.1145/3276954.3276956 (DOI)000455805700005 ()
    Conference
    SPLASH 2018 - Systems, Programming, Languages and Applications: Software for Humanity, Boston, 4-9 November, 2018
    Projects
    UPMARC2012-04867 Structured Aliasing
    Funder
    Swedish Research Council, 2012-04967
    Available from: 2018-11-26 Created: 2018-11-26 Last updated: 2019-02-04Bibliographically approved
  • 14.
    Bylund, Markus
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Personal service environments: Openness and user control in user-service interaction2001Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis describes my work on making the whole experience of using electronic services more pleasant and practical. More and more people use electronic services in their daily life — be it services for communicating with colleagues or family members, web-based bookstores, or network-based games for entertainment. However, electronic services in general are more difficult to use than they need to be. They are limited in how and when users can access them. Services do not collaborate despite obvious advantages to their users, and they put the integrity and privacy of their users at risk.

    In this thesis, I argue that there are structural reasons for these problems rather than problems with content or the technology per se. The focus when designing electronic services tends to be on the service providers or on the artifacts that are used for accessing the services. I present an approach that focuses on the user instead, which is based on the concept of personal service environments. These provide a mobile locale for storing and running the electronic services of individual users. This gives the user increased control over which services to use, from where they can be accessed, and what personal information services gather. The concept allows, and encourages, service collaboration, but not without letting the user maintain control over the process. Finally, personal service environments allow continuous usage of services while switching between interaction devices and moving between places.

    The sView system, which is also described, implements personal service environments and serves as an example of how the concept can be realized. The system consists of two parts. The first part is a specification of how both services for sView and infrastructure for handling services should be developed. The second part is a reference implementation of the specification, which includes sample services that add to and demonstrate the functionality of sView.

  • 15.
    Byström, Adam
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    From Intent to Code: Using Natural Language Processing2017Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Programming, and the ability to express one's intent to a machine, is becoming a very important skill in our digitalizing society. Today, instructing a machine such as a computer to perform actions is done through programming. What if this could be done with human language? This thesis examines how new technologies and methods in the form of Natural Language Processing can be used to make programming more accessible by translating intent expressed in natural language into code that a computer can execute. Related research has studied using natural language as a programming language and using natural language to instruct robots. These studies have shown promising results but are hindered by strict syntaxes, limited domains and an inability to handle ambiguity. Studies have also been made using Natural Language Processing to analyse source code, turning code into natural language. This thesis takes the reverse approach. By utilizing Natural Language Processing techniques, an intent can be translated into code containing concepts such as sequential execution, loops and conditional statements. In this study, a system for converting intent, expressed in English sentences, into code is developed. To analyse this approach to programming, an evaluation framework is developed, evaluating the system during the development process as well as the usage of the final system. The results show that this way of programming might have potential, but conclude that the Natural Language Processing models still have too low accuracy. Further research is required to increase this accuracy and to further assess the potential of this way of programming.

  • 16.
    Carlsson, Elin
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Mattsson, Moa
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    The MaRiQ model: A quantitative approach to risk management2019Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In recent years, cyber attacks and data fraud have become major issues to companies, businesses and nation states alike. The need for more accurate and reliable risk management models is therefore substantial.

    Today, cybersecurity risk management is often carried out on a qualitative basis, where risks are mapped to a predefined set of categories such as low, medium or high. This thesis aims to challenge that practice by presenting a model that assesses risks quantitatively, hence the name MaRiQ (Manage Risks Quantitatively).

    MaRiQ was developed based on collected requirements and contemporary literature on quantitative risk management. The model consists of a clearly defined flowchart and a supporting tool created in Excel. To generate scientifically validated results, MaRiQ makes use of a number of statistical techniques and mathematical functions, such as Monte Carlo simulations and probability distributions.

    To evaluate whether our developed model really was an improvement compared to current qualitative processes, we conducted a workshop at the end of the project. The organization that tested MaRiQ found the model useful and felt that it fulfilled most of their needs.

    Our results indicate that risk management within cybersecurity can and should be performed using more quantitative approaches than is current practice. Even though there are several potential developments to be made, MaRiQ demonstrates the possible advantages of transitioning from qualitative to quantitative risk management processes.
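
    The quantitative core described in the abstract — Monte Carlo simulation over probability distributions of loss — can be illustrated with a minimal Python sketch. The risk figures and the uniform impact ranges below are invented for illustration; MaRiQ itself is an Excel-based tool, and its exact distributions are not reproduced here:

    ```python
    import random

    def simulate_annual_loss(risks, trials=10_000, seed=0):
        """Monte Carlo estimate of total annual cyber loss.

        Each risk is (annual probability of occurrence, low impact, high impact);
        when a risk occurs, its impact is drawn uniformly from [low, high].
        All figures are made up for illustration -- they are not MaRiQ data.
        """
        rng = random.Random(seed)
        totals = []
        for _ in range(trials):
            total = 0.0
            for p, low, high in risks:
                if rng.random() < p:                 # does this risk occur this year?
                    total += rng.uniform(low, high)  # sample its monetary impact
            totals.append(total)
        totals.sort()
        return {
            "mean": sum(totals) / trials,
            "p50": totals[trials // 2],
            "p95": totals[int(trials * 0.95)],       # tail loss exceeded in ~5% of years
        }

    # Two hypothetical risks: a likely-but-cheap one and a rare-but-costly one.
    stats = simulate_annual_loss([(0.30, 10_000, 50_000), (0.05, 100_000, 500_000)])
    ```

    Reporting percentiles of the simulated loss distribution, rather than a low/medium/high label, is exactly the kind of output that distinguishes a quantitative model from the qualitative practice the thesis challenges.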

  • 17.
    Carlsson, Per
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Algorithms for Electronic Power Markets2004Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis we focus on resource allocation problems, and on electronic markets in particular. Our main application area is electricity markets. We present a number of algorithms and include practical experience.

    There is an ongoing restructuring of power markets in Europe and elsewhere; this implies that an industry that previously has been viewed as a natural monopoly becomes exposed to competition. In the thesis we move a step further, suggesting that end users should take active part in the trade on power markets such as (i) day-ahead markets and (ii) markets handling close to real-time balancing of power grids. Our ideas and results can be utilised (a) to increase the efficiency of these markets and (b) to handle strained situations when power systems operate at their limits. For this we utilise information and communication technology available today and develop electronic market mechanisms designed for large numbers of participants, typically distributed over a power grid.

    The papers of the thesis cover resource allocation with separable objective functions, a market mechanism that accepts actors with discontinuous demand, and mechanisms that allow actors to express combinatorial dependencies between traded commodities on multi-commodity markets. Further we present results from field tests and simulations.

    List of papers
    1. Resource Allocation With Wobbly Functions
    Open this publication in new window or tab >>Resource Allocation With Wobbly Functions
    2002 In: Computational Optimization and Applications, ISSN 0926-6003, Vol. 23, no 2, p. 171-200Article in journal (Refereed) Published
    Identifiers
    urn:nbn:se:uu:diva-92381 (URN)
    Available from: 2004-11-22 Created: 2004-11-22Bibliographically approved
    2. Extending Equilibrium Markets
    Open this publication in new window or tab >>Extending Equilibrium Markets
    2001 In: IEEE Intelligent Systems, ISSN 1094-7167, Vol. 16, no 4, p. 18-26Article in journal (Refereed) Published
    Identifiers
    urn:nbn:se:uu:diva-92382 (URN)
    Available from: 2004-11-22 Created: 2004-11-22Bibliographically approved
    3. Communication Test of Electronic Power Markets through Power Line Communication
    Open this publication in new window or tab >>Communication Test of Electronic Power Markets through Power Line Communication
    Chapter in book (Other academic) Published
    Identifiers
    urn:nbn:se:uu:diva-92383 (URN)
    Available from: 2004-11-22 Created: 2004-11-22Bibliographically approved
    4. A Tractable Mechanism for Time Dependent Markets
    Open this publication in new window or tab >>A Tractable Mechanism for Time Dependent Markets
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:uu:diva-92384 (URN)
    Available from: 2004-11-22 Created: 2004-11-22 Last updated: 2010-01-13Bibliographically approved
    5. A Flexible Model for Tree-Structured Multi-Commodity Markets
    Open this publication in new window or tab >>A Flexible Model for Tree-Structured Multi-Commodity Markets
    2007 (English)In: Electronic Commerce Research, ISSN 1389-5753, E-ISSN 1572-9362, Vol. 7, no 1, p. 69-88Article in journal (Refereed) Published
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-92385 (URN)10.1007/s10660-006-0063-Y (DOI)
    Available from: 2004-11-22 Created: 2004-11-22 Last updated: 2018-01-13Bibliographically approved
    6. Market Simulations
    Open this publication in new window or tab >>Market Simulations
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:uu:diva-92386 (URN)
    Available from: 2004-11-22 Created: 2004-11-22 Last updated: 2010-01-13Bibliographically approved
  • 18.
    Carlsson, Per
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Market and resource allocation algorithms with application to energy control2001Licentiate thesis, monograph (Other scientific)
    Abstract [en]

    The energy markets of today are markets with rather few active participants. The participants are, with few exceptions, large producers and distributors. The market mechanisms that are used are constructed with this kind of market situation in mind. With an automatic or semiautomatic approach, the market mechanism would be able to incorporate a larger number of participants. Smaller producers, and even consumers, could take an active part in the market. The gain is in more efficient markets, and – due to smaller fluctuations in demand – better resource usage from an environmental perspective.

    The energy markets of the Nordic countries (as well as many others) were deregulated during the last few years. The change has been radical and the situation is still rather new. We believe that the market can be made more efficient with the help of the dynamics of the small actors.

    The idealised world of theory (of economics) often relies on assumptions such as continuous demand and supply curves. These assumptions are useful, and they do not introduce problems in the power market situation of today, with relatively few, large participants. When consumers and small producers are introduced on the market, the situation is different. Then it is a drawback if the market mechanisms cannot handle discontinuous supply and demand.

    The growth in access to computational power and data communications that we have experienced in recent years (and are still experiencing) could be utilised when constructing mechanisms for the energy markets of tomorrow.

    In this thesis we suggest a new market mechanism, ConFAst, that utilises the technological progress to make it possible to incorporate a large number of active participants on the market. The mechanism does not rely on the assumptions above. The gain is a more efficient market with less fluctuations in demand over the day.

    To make this possible there is a need for efficient algorithms, in particular this mechanism relies on an efficient aggregation algorithm. An algorithm for aggregation of objective functions is part of this thesis. The algorithm handles maximisation with nonconcave, even noisy, objective functions. Experimental results show that the approach, in practically relevant cases, is significantly faster than the standard algorithm.

  • 19.
    Castegren, Elias
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Capability-Based Type Systems for Concurrency Control2018Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Since the early 2000s, in order to keep up with the performance predictions of Moore's law, hardware vendors have had to turn to multi-core computers. Today, parallel hardware is everywhere, from massive server halls to the phones in our pockets. However, this parallelism does not come for free. Programs must explicitly be written to allow for concurrent execution, which adds complexity that is not present in sequential programs. In particular, if two concurrent processes share the same memory, care must be taken so that they do not overwrite each other's data. This issue of data-races is exacerbated in object-oriented languages, where shared memory in the form of aliasing is ubiquitous. Unfortunately, most mainstream programming languages were designed with sequential programming in mind, and therefore provide little or no support for handling this complexity. Even though programming abstractions like locks can be used to synchronise accesses to shared memory, the burden of using these abstractions correctly and efficiently is left to the programmer.

    The contribution of this thesis is programming language technology for controlling concurrency in the presence of shared memory. It is based on the concept of reference capabilities, which facilitate safe concurrent programming by restricting how memory may be accessed and shared. Reference capabilities can be used to enforce correct synchronisation when accessing shared memory, as well as to prevent unsafe sharing when using more fine-grained concurrency control, such as lock-free programming. This thesis presents the design of a capability-based type system with low annotation overhead that can statically guarantee the absence of data-races without giving up object-oriented features like aliasing, subtyping and code reuse. The type system is formally proven safe, and has been implemented for the highly concurrent object-oriented programming language Encore.
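    The synchronisation burden described in the abstract can be sketched in plain Java (this is an illustrative example, not Encore code; the class and method names are invented for the sketch). Two threads increment a shared counter; the lock makes the combined result deterministic, and forgetting it would reintroduce exactly the kind of data race the thesis targets.

    ```java
    public class SharedCounter {
        private long count = 0;
        private final Object lock = new Object();

        void increment() {
            synchronized (lock) {  // removing this lock reintroduces the data race
                count++;
            }
        }

        long get() {
            synchronized (lock) { return count; }
        }

        public static void main(String[] args) throws InterruptedException {
            SharedCounter c = new SharedCounter();
            Runnable work = () -> { for (int i = 0; i < 100_000; i++) c.increment(); };
            Thread t1 = new Thread(work), t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join(); t2.join();
            // With the lock, the result is always 200000; without it, the two
            // unsynchronised read-modify-write sequences can interleave and lose updates.
            System.out.println(c.get());
        }
    }
    ```

    Nothing in Java's type system forces the programmer to take the lock before touching `count`; a capability-based type system of the kind the thesis develops rejects such unprotected accesses at compile time.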

    List of papers
    1. Reference Capabilities for Trait Based Reuse and Concurrency Control
    2016 (English). Report (Other academic)
    Abstract [en]

    The proliferation of shared mutable state in object-oriented programming complicates software development, as two seemingly unrelated operations may interact via an alias and produce unexpected results. In concurrent programming this manifests itself as data-races.

    Concurrent object-oriented programming further suffers from the fact that code that warrants synchronisation cannot easily be distinguished from code that does not. The burden is placed solely on the programmer to reason about alias freedom, sharing across threads and side-effects to deduce where and when to apply concurrency control, without inadvertently blocking parallelism.

    This paper presents a reference capability approach to concurrent and parallel object-oriented programming where all uses of aliases are guaranteed to be data-race free. The static type of an alias describes its possible sharing without using explicit ownership or effect annotations. Type information can express non-interfering deterministic parallelism without dynamic concurrency control, thread-locality, lock-based schemes, and guarded-by relations giving multi-object atomicity to nested data structures. Unification of capabilities and traits allows trait-based reuse across multiple concurrency scenarios with minimal code duplication. The resulting system brings together features from a wide range of prior work in a unified way.
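    The idea of a unique (linear) reference capability can be approximated at run time in plain Java (a loose analogy only; the paper's system enforces this statically, and `Iso` and `consume` are invented names for the sketch). Consuming the capability transfers sole access to the value, so it can be handed to another thread without leaving an alias behind.

    ```java
    import java.util.concurrent.atomic.AtomicReference;

    // Runtime approximation of a "unique" capability: at most one consume succeeds.
    final class Iso<T> {
        private final AtomicReference<T> slot;
        Iso(T value) { slot = new AtomicReference<>(value); }

        T consume() {
            T v = slot.getAndSet(null);  // atomically give up this handle's access
            if (v == null) throw new IllegalStateException("capability already consumed");
            return v;
        }
    }

    public class IsoDemo {
        public static void main(String[] args) {
            Iso<StringBuilder> cap = new Iso<>(new StringBuilder("data"));
            StringBuilder owned = cap.consume();   // unique access transferred
            owned.append("!");
            System.out.println(owned);
            try {
                cap.consume();                     // a second access is rejected
            } catch (IllegalStateException e) {
                System.out.println("no aliasing: " + e.getMessage());
            }
        }
    }
    ```

    A capability type system moves this check from run time to compile time: code that tries to use a consumed alias simply does not type-check, so no defensive `null` handling or exceptions are needed.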

    Series
    Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2016-007
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-309774 (URN)
    Projects
    Structured Aliasing, Upscale, UPMARC
    Available from: 2016-12-07. Created: 2016-12-07. Last updated: 2018-01-13. Bibliographically approved
    2. Kappa: Insights, Current Status and Future Work