Uppsala University Publications
Search results 1 - 50 of 77
  • 1.
    Adwent, Ann-Kristin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Shared IT Service Center i kommuner [Shared IT Service Centers in Municipalities], 2012. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    To maintain an adequate IT service for various users and needs, more municipalities are looking at the possibility of merging local IT departments. One solution for merging multiple services/functions and creating synergy opportunities is the Shared Service Center (SSC). The concept of an SSC is that the administrative service itself becomes a core activity within the organization. The aim of this thesis is to provide municipalities that are considering a merger of their local IT departments with recommendations on how to design a Shared IT Service Center. The recommendations are based on an analysis of IT-management literature and reports, and on a study of three ongoing collaborations.

    The conclusions drawn from the study suggest the following guidelines for the design of a Shared IT Service Center for municipalities:

    • Create a vision associated with a specific and structured target state.
    • Identify the needs of different target groups in the municipalities and set a common standard.
    • Create a clear and practical model/SLA for the cooperation and agreement.
    • Ensure that the individual municipalities retain their commissioning competence, so that it is not lost in the context of a common IT operation.
    • Find an organizational form that is democratic for the member municipalities and facilitates a long-term partnership.
    • Specify operation and maintenance so that they can be regulated and controlled.
    • Establish a common help desk.
    • Establish a common standard and a consolidated infrastructure before introducing a common technology platform.

  • 2.
    Andrejev, Andrej
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Semantic Web Queries over Scientific Data, 2016. Doctoral thesis, monograph (Other academic).
    Abstract [en]

    Semantic Web and Linked Open Data provide a potential platform for interoperability of scientific data, offering a flexible model for providing machine-readable and queryable metadata. However, RDF and SPARQL have gained limited adoption within the scientific community, mainly due to the lack of support for managing massive numeric data, along with certain other important features – such as extensibility with user-defined functions, query modularity, and integration with existing environments and workflows.

    We present the design, implementation and evaluation of Scientific SPARQL – a language for querying data and metadata combined, represented using the RDF graph model extended with numeric multidimensional arrays as node values – RDF with Arrays. The techniques used to store RDF with Arrays in a scalable way and process Scientific SPARQL queries and updates are implemented in our prototype software – Scientific SPARQL Database Manager, SSDM, and its integrations with data storage systems and computational frameworks. This includes scalable storage solutions for numeric multidimensional arrays and an efficient implementation of array operations. The arrays can be physically stored in a variety of external storage systems, including files, relational databases, and specialized array data stores, using our Array Storage Extensibility Interface. Whenever possible SSDM accumulates array operations and accesses array contents in a lazy fashion.

    In scientific applications numeric computations are often used for filtering or post-processing the retrieved data, which can be expressed in a functional way. Scientific SPARQL allows expressing common query sub-tasks with functions defined as parameterized queries. This becomes especially useful along with functional language abstractions such as lexical closures and second-order functions, e.g. array mappers.

    Existing computational libraries can be interfaced and invoked from Scientific SPARQL queries as foreign functions. Cost estimates and alternative evaluation directions may be specified, aiding the construction of better execution plans. Costly array processing, e.g. filtering and aggregation, is thus performed on the server, reducing the amount of communication. Furthermore, common supported operations are delegated to the array storage back-ends, according to their capabilities. Both expressivity and performance of Scientific SPARQL are evaluated on a real-world example, and further performance tests are run using our mini-benchmark for array queries.
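
    As a rough illustration of the pattern described above (select data sets by metadata, then filter on their numeric contents), the following Python sketch uses rdflib and NumPy. It is not SSDM or Scientific SPARQL: plain SPARQL has no array-valued nodes, so the arrays live outside the RDF graph and the numeric filtering is ordinary post-processing. The namespace and resource names are invented for the example.

        # Illustrative only: a plain-SPARQL analogue of the metadata + numeric-filter
        # pattern. It does NOT use SSDM or Scientific SPARQL; array handling here is
        # ordinary Python post-processing.
        import numpy as np
        from rdflib import Graph, Namespace, Literal, URIRef

        EX = Namespace("http://example.org/")   # hypothetical namespace

        g = Graph()
        # Tiny "experiments" whose numeric results are stored externally (here: a dict).
        arrays = {EX.run1: np.random.rand(100), EX.run2: np.random.rand(100)}
        for run in arrays:
            g.add((run, EX.instrument, Literal("sensor-A")))

        # SPARQL selects runs by metadata; numeric filtering happens afterwards.
        results = g.query("""
            PREFIX ex: <http://example.org/>
            SELECT ?run WHERE { ?run ex:instrument "sensor-A" }
        """)
        for (run,) in results:
            data = arrays[URIRef(run)]
            if data.mean() > 0.5:               # numeric post-processing step
                print(run, data.mean())

    In Scientific SPARQL, by contrast, such array conditions can be part of the query itself and evaluated close to the data, as the abstract describes.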

  • 3.
    Aronis, Stavros
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Effective Techniques for Stateless Model Checking, 2018. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Stateless model checking is a technique for testing and verifying concurrent programs, based on exploring the different ways in which operations executed by the processes of a concurrent program can be scheduled. The goal of the technique is to expose all behaviours that can be a result of scheduling non-determinism. As the number of possible schedulings is huge, however, techniques that reduce the number of schedulings that must be explored to achieve verification have been developed. Dynamic partial order reduction (DPOR) is a prominent such technique.

    This dissertation presents a number of improvements to dynamic partial order reduction that significantly increase the effectiveness of stateless model checking. Central among these improvements are the Source and Optimal DPOR algorithms (and the theoretical framework behind them) and a technique that allows the observability of the interference of operations to be used in dynamic partial order reduction. Each of these techniques can exponentially decrease the number of schedulings that need to be explored to verify a concurrent program. The dissertation also presents a simple bounding technique that is compatible with DPOR algorithms and effective for finding bugs in concurrent programs, if the number of schedulings is too big to make full verification possible in a reasonable amount of time, even when the improved algorithms are used.

    All improvements have been implemented in Concuerror, a tool for applying stateless model checking to Erlang programs. In order to increase the effectiveness of the tool, the interference of the high-level operations of the Erlang/OTP implementation is examined, classified and precisely characterized. Aspects of the implementation of the tool are also described. Finally, a use case is presented, showing how Concuerror was used to find bugs and verify key correctness properties in repair techniques for the CORFU chain replication protocol.
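
    To see what stateless model checking has to explore, consider the toy Python sketch below. It is not Concuerror and implements no DPOR; it simply enumerates every interleaving of two tiny processes that write to shared variables, i.e. the search space that Source/Optimal DPOR prune.

        # A toy illustration (not Concuerror, not DPOR): exhaustively exploring every
        # scheduling of two straight-line "processes" over shared variables shows the
        # combinatorial blow-up that DPOR-style techniques aim to reduce.
        from itertools import permutations

        def run(schedule):
            """Execute a fixed interleaving and return the final shared state."""
            shared = {"x": 0, "y": 0}
            program = {
                "P": [("x", 1), ("y", 1)],   # P writes x:=1 then y:=1
                "Q": [("y", 2), ("x", 2)],   # Q writes y:=2 then x:=2
            }
            counters = {"P": 0, "Q": 0}
            for proc in schedule:
                var, val = program[proc][counters[proc]]
                shared[var] = val
                counters[proc] += 1
            return tuple(sorted(shared.items()))

        # Every scheduling = every interleaving of P's and Q's steps.
        schedules = set(permutations("PPQQ"))
        outcomes = {run(s) for s in schedules}
        print(len(schedules), "schedulings explored,", len(outcomes), "distinct outcomes")

    Even these four steps already give six schedulings; with more steps per process the count grows combinatorially, which is why the reductions described above matter.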

    List of papers
    1. Source Sets: A Foundation for Optimal Dynamic Partial Order Reduction
    2017 (English). In: Journal of the ACM, ISSN 0004-5411, E-ISSN 1557-735X, Vol. 64, no 4, article id 25. Article in journal (Refereed). Published.
    Abstract [en]

    Stateless model checking is a powerful method for program verification that, however, suffers from an exponential growth in the number of explored executions. A successful technique for reducing this number, while still maintaining complete coverage, is Dynamic Partial Order Reduction (DPOR), an algorithm originally introduced by Flanagan and Godefroid in 2005 and since then not only used as a point of reference but also extended by various researchers. In this article, we present a new DPOR algorithm, which is the first to be provably optimal in that it always explores the minimal number of executions. It is based on a novel class of sets, called source sets, that replace the role of persistent sets in previous algorithms. We begin by showing how to modify the original DPOR algorithm to work with source sets, resulting in an efficient and simple-to-implement algorithm, called source-DPOR. Subsequently, we enhance this algorithm with a novel mechanism, called wakeup trees, that allows the resulting algorithm, called optimal-DPOR, to achieve optimality. Both algorithms are then extended to computational models where processes may disable each other, for example, via locks. Finally, we discuss trade-offs of the source- and optimal-DPOR algorithms and present programs that illustrate significant time and space performance differences between them. We have implemented both algorithms in a publicly available stateless model checking tool for Erlang programs, while the source-DPOR algorithm is at the core of a publicly available stateless model checking tool for C/pthread programs running on machines with relaxed memory models. Experiments show that source sets significantly increase the performance of stateless model checking compared to using the original DPOR algorithm and that wakeup trees incur only a small overhead in both time and space in practice.

    Place, publisher, year, edition, pages
    Association for Computing Machinery (ACM), 2017
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:uu:diva-331842 (URN), 10.1145/3073408 (DOI), 000410615900002
    Projects
    UPMARC, RELEASE
    Funder
    EU, FP7, Seventh Framework Programme, 287510; Swedish Research Council
    Available from: 2017-10-18 Created: 2017-10-18 Last updated: 2018-01-13. Bibliographically approved
    2. The shared-memory interferences of Erlang/OTP built-ins
    2017 (English). In: Proceedings of the 16th ACM SIGPLAN International Workshop on Erlang (Erlang '17) / [ed] Chechina, N.; Fritchie, S.L., New York: Association for Computing Machinery (ACM), 2017, p. 43-54. Conference paper, Published paper (Refereed).
    Abstract [en]

    Erlang is a concurrent functional language based on the actor model of concurrency. In the purest form of this model, actors are realized by processes that do not share memory and communicate with each other exclusively via message passing. Erlang comes quite close to this model, as message passing is the primary form of interprocess communication and each process has its own memory area that is managed by the process itself. For this reason, Erlang is often referred to as implementing “shared nothing” concurrency. Although this is a convenient abstraction, in reality Erlang’s main implementation, the Erlang/OTP system, comes with a large number of built-in operations that access memory which is shared by processes. In this paper, we categorize these built-ins, and characterize the interferences between them that can result in observable differences of program behaviour when these built-ins are used in a concurrent setting. The paper is complemented by a publicly available suite of more than one hundred small Erlang programs that demonstrate the racing behaviour of these built-ins.

    Place, publisher, year, edition, pages
    New York: Association for Computing Machinery (ACM), 2017
    Keywords
    Actors, BEAM, Concuerror, Erlang, Scheduling nondeterminism
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:uu:diva-331840 (URN), 10.1145/3123569.3123573 (DOI), 000426922100005, 978-1-4503-5179-9 (ISBN)
    Conference
    16th ACM SIGPLAN International Workshop on Erlang (Erlang), Sep 08, 2017, Oxford, England.
    Projects
    UPMARC
    Funder
    Swedish Research Council
    Available from: 2017-10-18 Created: 2017-10-18 Last updated: 2018-07-03. Bibliographically approved
    3. Testing And Verifying Chain Repair Methods For CORFU Using Stateless Model Checking
    2017 (English). Conference paper, Published paper (Refereed).
    Abstract [en]

    Corfu is a distributed shared log that is designed to be scalable and reliable in the presence of failures and asynchrony. Internally, Corfu is fully replicated for fault tolerance, without sharding data or sacrificing strong consistency. In this case study, we present the modeling approaches we followed to test and verify, using Concuerror, the correctness of repair methods for the Chain Replication protocol suitable for Corfu. In the first two methods we tried, Concuerror located bugs quite fast. In contrast, the tool did not manage to find bugs in the third method, but the time this took also motivated an improvement in the tool that reduces the number of traces explored. Besides more details about all the above, we present experiences and lessons learned from applying stateless model checking for verifying complex protocols suitable for distributed programming.

    Place, publisher, year, edition, pages
    Cham: Springer, 2017
    Series
    Lecture Notes in Computer Science ; 10510
    National Category
    Computer Systems; Software Engineering
    Identifiers
    urn:nbn:se:uu:diva-331836 (URN), 10.1007/978-3-319-66845-1_15 (DOI), 978-3-319-66844-4 (ISBN), 978-3-319-66845-1 (ISBN)
    Conference
    Integrated Formal Methods. IFM 2017
    Projects
    UPMARC
    Available from: 2017-10-18 Created: 2017-10-18 Last updated: 2018-01-13. Bibliographically approved
    4. Optimal Dynamic Partial Order Reduction with Observers
    (English). Manuscript (preprint) (Other academic).
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-333508 (URN)
    Projects
    UPMARC
    Available from: 2017-11-21 Created: 2017-11-21 Last updated: 2018-01-13
  • 4.
    Berghult, Astrid
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    A practical comparison between algebraic and statistical attacks on the lightweight cipher SIMON, 2016. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In the summer of 2013, the NSA released a new family of lightweight block ciphers called SIMON. However, they did not publish any assessment of the security of SIMON. Since then only a few papers on this topic have been released, and none of them have included an algebraic analysis. Moreover, only one paper described a practical implementation of the attack. This master thesis aims to implement a practical attack, both algebraic and differential, on SIMON. In doing so, we are able to make a comparison between the two different attack methods. The algebraic attack was executed with the SAT solver CryptoMiniSat2 and could break 7 rounds. The differential attack was implemented in three steps. First we created a difference distribution table (DDT) and then we identified a differential by a search algorithm for the DDT. In the last step we designed a key recovery attack to recover the last round key. The attack could break 13 rounds for a 9 round differential. With a simple expansion of the key recovery attack it has the potential to break even more rounds for the same 9 round differential. This indicates that algebraic cryptanalysis might not be such a strong tool, since it could only break 7 rounds. Furthermore, if a generic algebraic attack does not work on SIMON it has little or no chance of being successful on a more complex cipher. In other words, this algebraic attack may serve as a benchmark for the efficiency of generic algebraic attacks.
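
    For readers unfamiliar with SIMON, its non-linear round component is f(x) = ((x <<< 1) & (x <<< 8)) XOR (x <<< 2), where <<< is rotation within an n-bit word. The Python sketch below is illustrative only, not the code developed in the thesis: it tabulates one row of a difference distribution table for f on 16-bit words, the kind of table the differential attack starts from.

        # Toy sketch (not the thesis implementation): for a fixed input difference,
        # count how often each output difference f(x) ^ f(x ^ din) occurs.
        from collections import Counter

        N = 16                       # word size; SIMON variants use 16..64-bit words

        def rol(x, r, n=N):
            return ((x << r) | (x >> (n - r))) & ((1 << n) - 1)

        def f(x):
            return (rol(x, 1) & rol(x, 8)) ^ rol(x, 2)

        def ddt_row(din):
            row = Counter()
            for x in range(1 << N):
                row[f(x) ^ f(x ^ din)] += 1
            return row

        row = ddt_row(0x0001)
        best_dout, count = row.most_common(1)[0]
        print(f"most likely output difference: {best_dout:#06x} "
              f"with probability {count / (1 << N):.4f}")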

  • 5.
    Lindegren, Axel
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Partitioning temporal networks: A study of finding the optimal partition of temporal networks using community detection, 2018. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Many of the algorithms used for community detection in temporal networks have been adapted from static network theory. A common approach to dealing with the temporal dimension is to create multiple static networks from one temporal network, based on a time condition. In this thesis, the focus lies on identifying the optimal partitioning of a few temporal networks. This is done by utilizing the popular community detection algorithm called Generalized Louvain. The output of Generalized Louvain comes in two parts: first, the created community structure, i.e. how the network is connected; secondly, a measure called modularity, which is a scalar value representing the quality of the identified community structure. The methodology used is aimed at creating comparable results by normalizing modularity. The normalization process can be explained in two major steps: 1) study the effects on modularity when partitioning a temporal network into an increasing number of slices; 2) study the effects on modularity when varying the number of connections (edges) in each time slice. The results show that the created methodology yields comparable results on two out of the four temporal networks tested here, implying that it might be more suited for some networks than others. This can serve as an indication that there does not exist a general model for community detection in temporal networks. Instead, the type of network is key to choosing the method.
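
    As a point of reference, the static-slice part of this approach can be mimicked with standard tools. The Python sketch below is not Generalized Louvain (the thesis applies that algorithm to the temporal network itself); it only slices a timestamped edge list into windows and runs an ordinary modularity-based method per slice with networkx, printing the modularity score that the thesis then normalizes. The toy edge list is invented.

        # Illustrative static-slice analogue (not Generalized Louvain): split a
        # timestamped edge list into time slices, detect communities per slice,
        # and record the modularity score.
        import networkx as nx
        from networkx.algorithms import community

        edges = [  # (u, v, timestamp) -- toy temporal network
            ("a", "b", 1), ("b", "c", 1), ("a", "c", 2), ("d", "e", 2),
            ("e", "f", 3), ("d", "f", 3), ("c", "d", 3),
            ("a", "b", 5), ("b", "c", 5), ("a", "c", 6), ("d", "e", 6),
            ("e", "f", 7), ("d", "f", 7), ("a", "f", 7),
        ]

        def slices(edge_list, width):
            """Group edges into consecutive time windows of the given width."""
            t_max = max(t for _, _, t in edge_list)
            for start in range(0, t_max + 1, width):
                yield [(u, v) for u, v, t in edge_list if start <= t < start + width]

        for i, slice_edges in enumerate(slices(edges, width=4)):
            if not slice_edges:
                continue
            G = nx.Graph(slice_edges)
            parts = community.greedy_modularity_communities(G)
            q = community.modularity(G, parts)
            print(f"slice {i}: {len(parts)} communities, modularity Q = {q:.3f}")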

  • 6.
    Aziz, Yama
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Exploring a keyword driven testing framework: a case study at Scania IT, 2017. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The purpose of this thesis is to investigate organizational quality assurance through the international testing standard ISO 29119. The focus will be on how an organization carries out testing processes and designs and implements test cases. Keyword driven testing is a test composition concept in ISO 29119 and suitable for automation. This thesis will answer how keyword driven testing can facilitate the development of maintainable test cases and support test automation in an agile organization.

    The methodology used was a qualitative case study including semi-structured interviews and focus groups with agile business units within Scania IT. Among the interview participants were developers, test engineers, scrum masters and a unit manager.

    The results describe testing practices carried out in several agile business units, maintainability issues with test automation and general ideas of how test automation should be approached. Common issues with test automation were test cases failing due to changed test inputs, inexperience with test automation frameworks and lack of resources due to project release cycle.

    This thesis concludes that keyword driven testing has the potential of solving several maintainability issues with test cases breaking. However, the practicality and effectiveness of said potential remain unanswered. Moreover, successfully developing an automated keyword driven testing framework requires integration with existing test automation tools and considering the agile organizational circumstances.
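
    The keyword driven idea itself is small enough to sketch: test cases become data (sequences of keywords with arguments), and a single table maps keywords to implementation code, which is where the maintainability argument comes from. The Python sketch below is a generic illustration, not Scania IT's framework or any specific ISO 29119 tooling; the keywords and actions are invented.

        # Minimal keyword-driven test runner sketch (keyword names and actions are
        # hypothetical). Test cases are data; the keyword-to-implementation mapping
        # lives in one place, so changes in the system under test are absorbed there.
        KEYWORDS = {}

        def keyword(name):
            def register(fn):
                KEYWORDS[name] = fn
                return fn
            return register

        @keyword("Open Application")
        def open_application(ctx, app):
            ctx["app"] = app                      # stand-in for launching a real system

        @keyword("Enter Value")
        def enter_value(ctx, field, value):
            ctx.setdefault("fields", {})[field] = value

        @keyword("Verify Value")
        def verify_value(ctx, field, expected):
            assert ctx["fields"][field] == expected, f"{field} != {expected}"

        def run_test(steps):
            ctx = {}
            for name, *args in steps:
                KEYWORDS[name](ctx, *args)        # keyword implementations may change;
                                                  # the test case itself stays intact

        run_test([
            ("Open Application", "orders"),
            ("Enter Value", "quantity", "3"),
            ("Verify Value", "quantity", "3"),
        ])
        print("test passed")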

  • 7.
    Backlund, Ludvig
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    A technical overview of distributed ledger technologies in the Nordic capital market, 2016. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis examines how Distributed Ledger Technologies (DLTs) could be utilized in capital markets in general and in the Nordic capital market in particular. DLTs were introduced with the so-called cryptocurrency Bitcoin in 2009 and have in the last few years been of interest to various financial institutions as a means to streamline financial processes. By combining computer-science concepts such as public-key cryptography and consensus algorithms, DLTs make it possible to keep shared databases with limited trust among the participants and without the use of a trusted third party. In this thesis, various actors on the Nordic capital market were interviewed and their stances on DLTs were summarized. In addition to this, a Proof of Concept of a permissioned DLT application for ownership registration of securities was constructed. It was found that all the interviewees were generally optimistic about the potential of DLTs to increase the efficiency of capital markets. The technology needs to be adapted to handle the capital market's demands for privacy and large transaction volumes, but there is general agreement among the interviewees that these issues will be solved. The biggest challenge for an adoption of DLTs seems to lie in finding a common industry-wide standard.
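
    The core mechanism behind such a ledger can be illustrated in a few lines of Python. The sketch below is not the thesis Proof of Concept (which was permissioned and used for ownership registration of securities); it only shows the hash chaining that makes tampering with earlier ownership records detectable. Names and record fields are invented.

        # Minimal hash-chained ledger sketch: each ownership record includes the hash
        # of the previous record, so any later modification breaks the chain and is
        # detectable by every participant holding a copy.
        import hashlib, json

        def record_hash(record):
            return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

        def append(ledger, security, owner):
            prev = ledger[-1]["hash"] if ledger else "0" * 64
            entry = {"security": security, "owner": owner, "prev": prev}
            entry["hash"] = record_hash({k: entry[k] for k in ("security", "owner", "prev")})
            ledger.append(entry)

        def verify(ledger):
            prev = "0" * 64
            for entry in ledger:
                body = {k: entry[k] for k in ("security", "owner", "prev")}
                if entry["prev"] != prev or entry["hash"] != record_hash(body):
                    return False
                prev = entry["hash"]
            return True

        ledger = []
        append(ledger, "BOND-123", "Alice")
        append(ledger, "BOND-123", "Bob")       # ownership transfer
        print(verify(ledger))                   # True
        ledger[0]["owner"] = "Mallory"          # tampering attempt
        print(verify(ledger))                   # False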

  • 8.
    Badiozamany, Sobhan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Real-time data stream clustering over sliding windows, 2016. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    In many applications, e.g. urban traffic monitoring, stock trading, and industrial sensor data monitoring, clustering algorithms are applied on data streams in real-time to find current patterns. Here, sliding windows are commonly used as they capture concept drift.

    Real-time clustering over sliding windows is early detection of continuously evolving clusters as soon as they occur in the stream, which requires efficient maintenance of cluster memberships that change as windows slide.

    Data stream management systems (DSMSs) provide high-level query languages for searching and analyzing streaming data. In this thesis we extend a DSMS with a real-time data stream clustering framework called the Generic 2-phase Continuous Summarization framework (G2CS). G2CS modularizes data stream clustering by taking as input clustering algorithms which are expressed in terms of a number of functions and indexing structures. G2CS supports real-time clustering through an efficient window sliding mechanism and algorithm-transparent indexing. A particular challenge for real-time detection of a high number of rapidly evolving clusters is the efficiency of window slides for clustering algorithms where deletion of expired data is not supported, e.g. BIRCH. To that end, G2CS includes a novel window maintenance mechanism called Sliding Binary Merge (SBM). To further improve real-time sliding performance, G2CS uses generation-based multi-dimensional indexing where indexing structures suitable for the clustering algorithms can be plugged in.
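
    Independently of G2CS and SBM, the general pane-based pattern for clustering over a sliding window can be sketched as follows. The Python sketch keeps the most recent panes in a bounded deque, so expiry is just dropping the oldest pane, and reclusters the window contents on every slide with scikit-learn's KMeans. It is a generic illustration with synthetic data, not the framework described above.

        # Generic sliding-window clustering sketch (not G2CS/SBM): the stream is cut
        # into small panes; the window is the concatenation of the most recent panes.
        # On every slide the oldest pane is dropped and the clustering is recomputed.
        from collections import deque
        import numpy as np
        from sklearn.cluster import KMeans

        PANES_PER_WINDOW = 4
        window = deque(maxlen=PANES_PER_WINDOW)   # dropping the oldest pane = expiry

        rng = np.random.default_rng(0)
        for step in range(8):
            pane = rng.normal(loc=(step % 2) * 5, scale=0.5, size=(50, 2))  # new arrivals
            window.append(pane)                   # slide: newest pane in, oldest out
            data = np.vstack(window)
            model = KMeans(n_clusters=2, n_init=10).fit(data)
            print(f"slide {step}: window size {len(data)}, "
                  f"centers ~ {np.round(model.cluster_centers_, 1).tolist()}")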

    List of papers
    1. Scalable ordered indexing of streaming data
    2012 (English). In: 3rd International Workshop on Accelerating Data Management Systems using Modern Processor and Storage Architectures, 2012, p. 11. Conference paper, Published paper (Refereed).
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-185068 (URN)
    Conference
    ADMS 2012, Istanbul, Turkey
    Projects
    eSSENCE
    Available from: 2012-08-27 Created: 2012-11-19 Last updated: 2018-01-12. Bibliographically approved
    2. Grand challenge: Implementation by frequently emitting parallel windows and user-defined aggregate functions
    2013 (English). In: Proc. 7th ACM International Conference on Distributed Event-Based Systems, New York: ACM Press, 2013, p. 325-330. Conference paper, Published paper (Refereed).
    Place, publisher, year, edition, pages
    New York: ACM Press, 2013
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-211954 (URN), 10.1145/2488222.2488284 (DOI), 978-1-4503-1758-0 (ISBN)
    Conference
    DEBS 2013
    Available from: 2013-06-29 Created: 2013-12-03 Last updated: 2018-01-11. Bibliographically approved
    3. Distributed multi-query optimization of continuous clustering queries
    2014 (English). In: Proc. VLDB 2014 PhD Workshop, 2014. Conference paper, Published paper (Refereed).
    Abstract [en]

    This work addresses the problem of sharing execution plans for queries that continuously cluster streaming data to provide an evolving summary of the data stream. This is challenging since clustering is an expensive task, there might be many clustering queries running simultaneously, each continuous query has a long life time span, and the execution plans often overlap. Clustering is similar to conventional grouped aggregation but cluster formation is more expensive than group formation, which makes incremental maintenance more challenging. The goal of this work is to minimize response time of continuous clustering queries with limited resources through multi-query optimization. To that end, strategies for sharing execution plans between continuous clustering queries are investigated and the architecture of a system is outlined that optimizes the processing of multiple such queries. Since there are many clustering algorithms, the system should be extensible to easily incorporate user defined clustering algorithms.

    National Category
    Computer Sciences
    Research subject
    Computer Science with specialization in Database Technology
    Identifiers
    urn:nbn:se:uu:diva-302790 (URN)
    Conference
    VLDB 2014
    Available from: 2016-09-09 Created: 2016-09-09 Last updated: 2018-01-10. Bibliographically approved
    4. Framework for real-time clustering over sliding windows
    2016 (English). In: Proc. 28th International Conference on Scientific and Statistical Database Management, New York: ACM Press, 2016, p. 1-13, article id 19. Conference paper, Published paper (Refereed).
    Place, publisher, year, edition, pages
    New York: ACM Press, 2016
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-302792 (URN), 10.1145/2949689.2949696 (DOI), 978-1-4503-4215-5 (ISBN)
    Conference
    SSDBM 2016
    Available from: 2016-07-18 Created: 2016-09-09 Last updated: 2018-01-10. Bibliographically approved
  • 9.
    Bergquist, Jonatan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Blockchain Technology and Smart Contracts: Privacy-preserving Tools, 2017. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The purpose of this Master's thesis is to explore blockchain technology and smart contracts as a way of building privacy-sensitive applications. The main focus is on a medication plan containing prescriptions, built on a blockchain system of smart contracts. This is an example use case, but the results can be transferred to other ones where sensitive data is being shared and a proof of validity or authentication is needed. First the problem is presented: why medication plans are in need of digitalisation and why blockchain technology is a fitting technology for implementing such an application. Then blockchain technology is explained, since it is a very new and relatively unfamiliar IT construct. Thereafter, a design is proposed for solving the problem. A system of smart contracts was built to show how such an application can be built, and guidelines are suggested for how a blockchain system should be designed to fulfil the requirements that were defined. Finally, a discussion is held regarding the applicability of different blockchain designs to the problem of privacy-handling applications.

  • 10.
    Bernström, Kristian
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Näsman, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Utredning och implementering av en prototyp för integration av Prevas FOCS och ABB 800xA [Investigation and implementation of a prototype for integrating Prevas FOCS and ABB 800xA], 2014. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    ABB and Prevas have initiated a collaboration to sell a system to optimize steel industry furnaces, called FOCS. The purpose of this thesis is to investigate possibilities for integrating Prevas FOCS and ABB 800xA.

    The result of the investigation is used for an implementation of a prototype of the integrated system. The study shows a general method that can be used when integrating two software systems. The prototype of the integrated systems is made with usability principles in mind. This is a very important aspect in order to create a good working environment for the operators of a steel plant. It is also important to follow communication standards when integrating software systems. In an industrial network used in the steel industry, OPC is a standard for communication. We recommend ABB and Prevas to follow this standard when possible to make the integration smoother. To keep the cost of the integration to a minimum, it is also recommended to reuse existing resources. This can, however, have a negative effect on usability, and it is therefore important to keep a balance between cost and usability.

    The prototype made in this thesis accomplishes the goal of transferring the functionalities used by operators of Prevas FOCS to 800xA so that operators can control the processes using only one integrated system.

  • 11.
    Björdal, Gustav
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    String Variables for Constraint-Based Local Search, 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    String variables occur as a natural part of many computationally challenging problems. Usually, such problems are solved using problem-specific algorithms implemented from first principles, which can be a time-consuming and error-prone task. A constraint solver is a framework that can be used to solve computationally challenging problems by first declaratively defining the problem and then solving it using specialised off-the-shelf algorithms, which can cut down development time significantly and result in faster solution times and higher solution quality. There are many constraint solving technologies, one of which is constraint-based local search (CBLS). However, very few constraint solvers have native support for solving problems with string variables. The goal of this thesis is to add string variables as a native type to the CBLS solver OscaR/CBLS. The implementation was experimentally evaluated on the Closest String Problem and the Word Equation System problem. The evaluation shows that string variables for CBLS can be a viable option for solving string problems. However, further work is required to obtain even more competitive performance.
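
    The flavour of local search over string variables can be conveyed with a toy example. The Python sketch below is not OscaR/CBLS; it attacks a tiny Closest String instance by repeatedly picking a position and moving to the best single-character change, using the maximum Hamming distance as the cost to minimise. The instance and parameters are invented.

        # Toy local-search sketch (not OscaR/CBLS): greedy descent for a small
        # Closest String instance. Cost = max Hamming distance to the given strings.
        import random

        STRINGS = ["ACGT", "ACGA", "TCGT"]
        ALPHABET = "ACGT"

        def cost(candidate):
            return max(sum(a != b for a, b in zip(candidate, s)) for s in STRINGS)

        def local_search(steps=200, seed=1):
            random.seed(seed)
            cand = list(random.choice(STRINGS))          # start from one of the inputs
            for _ in range(steps):
                i = random.randrange(len(cand))
                neighbours = [cand[:i] + [c] + cand[i+1:] for c in ALPHABET]
                cand = min(neighbours, key=cost)         # best one-position change
            return "".join(cand), cost(cand)

        solution, radius = local_search()
        print(solution, "max Hamming distance:", radius)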

  • 12.
    Björklund, Henrik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Combinatorial Optimization for Infinite Games on Graphs, 2005. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Games on graphs have become an indispensable tool in modern computer science. They provide powerful and expressive models for numerous phenomena and are extensively used in computer-aided verification, automata theory, logic, complexity theory, computational biology, etc.

    The infinite games on finite graphs we study in this thesis have their primary applications in verification, but are also of fundamental importance from the complexity-theoretic point of view. They include parity, mean payoff, and simple stochastic games.

    We focus on solving graph games by using iterative strategy improvement and methods from linear programming and combinatorial optimization. To this end we consider old strategy evaluation functions, construct new ones, and show how all of them, due to their structural similarities, fit into a unifying combinatorial framework. This allows us to employ randomized optimization methods from combinatorial linear programming to solve the games in expected subexponential time.

    We introduce and study the concept of a controlled optimization problem, capturing the essential features of many graph games, and provide sufficient conditions for solvability of such problems in expected subexponential time.

    The discrete strategy evaluation function for mean payoff games, which we derive from the new controlled longest-shortest path problem, leads to improvement algorithms that are considerably more efficient than the previously known ones, and it also improves the efficiency of algorithms for parity games.

    We also define the controlled linear programming problem, and show how the games are translated into this setting. Subclasses of the problem, more general than the games considered, are shown to belong to NP intersection coNP, or even to be solvable by subexponential algorithms.

    Finally, we take the first steps in investigating the fixed-parameter complexity of parity, Rabin, Streett, and Muller games.

    List of papers
    1. A discrete subexponential algorithm for parity games
    2003. In: STACS 2003, 20th Annual Symposium on Theoretical Aspects of Computer Science, 2003, p. 663-674. Chapter in book (Other academic). Published.
    Identifiers
    urn:nbn:se:uu:diva-92509 (URN), 3-540-00623-0 (ISBN)
    Available from: 2005-01-18 Created: 2005-01-18. Bibliographically approved
    2. Complexity of model checking by iterative improvement: the pseudo-Boolean framework
    2003. In: Perspectives of Systems Informatics: 5th International Andrei Ershov Memorial Conference, 2003, p. 381-394. Chapter in book (Other academic). Published.
    Identifiers
    urn:nbn:se:uu:diva-92510 (URN), 3-540-20813-5 (ISBN)
    Available from: 2005-01-18 Created: 2005-01-18. Bibliographically approved
    3. Memoryless determinacy of parity and mean payoff games: a simple proof
    2004. In: Theoretical Computer Science, ISSN 0304-3975, Vol. 310, p. 365-378. Article in journal (Refereed). Published.
    Identifiers
    urn:nbn:se:uu:diva-92511 (URN)
    Available from: 2005-01-18 Created: 2005-01-18. Bibliographically approved
    4. A combinatorial strongly subexponential algorithm for mean payoff games
    2004. In: Mathematical Foundations of Computer Science 2004, 2004, p. 673-685. Chapter in book (Other academic). Published.
    Identifiers
    urn:nbn:se:uu:diva-92512 (URN), 3-540-22823-3 (ISBN)
    Available from: 2005-01-18 Created: 2005-01-18. Bibliographically approved
    5. The controlled linear programming problem: DIMACS Technical Report 2004-41, September 2004
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:uu:diva-92513 (URN)
    Available from: 2005-01-18 Created: 2005-01-18 Last updated: 2010-01-13. Bibliographically approved
  • 13.
    Brandauer, Stephan
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Castegren, Elias
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Wrigstad, Tobias
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    C♭: A New Modular Approach to Implementing Efficient and Tunable Collections, 2018. In: Proceedings of the 2018 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward! 2018), ACM, 2018, p. 57-71. Conference paper (Refereed).
  • 14.
    Bylund, Markus
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Personal service environments: Openness and user control in user-service interaction, 2001. Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    This thesis describes my work with making the whole experience of using electronic services more pleasant and practical. More and more people use electronic services in their daily life — be it services for communicating with colleagues or family members, web-based bookstores, or network-based games for entertainment. However, electronic services in general are more difficult to use than they would have to be. They are limited in how and when users can access them. Services do not collaborate despite obvious advantages to their users, and they put the integrity and privacy of their users at risk.

    In this thesis, I argue that there are structural reasons for these problems rather than problems with content or the technology per se. The focus when designing electronic services tends to be on the service providers or on the artifacts that are used for accessing the services. I present an approach that focuses on the user instead, which is based on the concept of personal service environments. These provide a mobile locale for storing and running electronic services of individual users. This gives the user increased control over which services to use, from where they can be accessed, and what personal information services gather. The concept allows, and encourages, service collaboration, but not without letting the user maintain control over the process. Finally, personal service environments allow continuous usage of services while switching between interaction devices and moving between places.

    The sView system, which is also described, implements personal service environments and serves as an example of how the concept can be realized. The system consists of two parts. The first part is a specification of how both services for sView and infrastructure for handling services should be developed. The second part is a reference implementation of the specification, which includes sample services that add to and demonstrate the functionality of sView.

  • 15.
    Byström, Adam
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    From Intent to Code: Using Natural Language Processing, 2017. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Programming and the possibility to express one’s intent to a machine is becoming a very important skill in our digitalizing society. Today, instructing a machine, such as a computer to perform actions is done through programming. What if this could be done with human language? This thesis examines how new technologies and methods in the form of Natural Language Processing can be used to make programming more accessible by translating intent expressed in natural language into code that a computer can execute. Related research has studied using natural language as a programming language and using natural language to instruct robots. These studies have shown promising results but are hindered by strict syntaxes, limited domains and inability to handle ambiguity. Studies have also been made using Natural Language Processing to analyse source code, turning code into natural language. This thesis has the reversed approach. By utilizing Natural Language Processing techniques, an intent can be translated into code containing concepts such as sequential execution, loops and conditional statements. In this study, a system for converting intent, expressed in English sentences, into code is developed. To analyse this approach to programming, an evaluation framework is developed, evaluating the system during the development process as well as usage of the final system. The results show that this way of programming might have potential but conclude that the Natural Language Processing models still have too low accuracy. Further research is required to increase this accuracy to further assess the potential of this way of programming. 
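
    To make the intent-to-code idea concrete without any NLP machinery, the toy Python sketch below maps a few fixed English sentence patterns to Python statements, including a loop and a conditional. It is purely illustrative: the thesis uses Natural Language Processing models rather than hand-written patterns, and the templates here are invented.

        # Toy pattern-based intent-to-code sketch (not the thesis system).
        import re

        RULES = [
            (r"set (\w+) to (\d+)", r"\1 = \2"),
            (r"print (\w+)",        r"print(\1)"),
            (r"repeat (\d+) times: (.+)",
             lambda m: f"for _ in range({m.group(1)}):\n    {translate(m.group(2))}"),
            (r"if (\w+) is greater than (\d+) then (.+)",
             lambda m: f"if {m.group(1)} > {m.group(2)}:\n    {translate(m.group(3))}"),
        ]

        def translate(sentence):
            for pattern, template in RULES:
                m = re.fullmatch(pattern, sentence.strip(), flags=re.IGNORECASE)
                if m:
                    return template(m) if callable(template) else m.expand(template)
            raise ValueError(f"intent not understood: {sentence!r}")

        program = "\n".join(translate(s) for s in [
            "set counter to 0",
            "repeat 3 times: print counter",
            "if counter is greater than 1 then print counter",
        ])
        print(program)
        exec(program)   # the generated code is ordinary Python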

  • 16.
    Carlsson, Per
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Algorithms for Electronic Power Markets, 2004. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    In this thesis we focus on resource allocation problems and electronic markets in particular. Our main application area is electricity markets. We present a number of algorithms and include practical experience.

    There is an ongoing restructuring of power markets in Europe and elsewhere; this implies that an industry that previously has been viewed as a natural monopoly becomes exposed to competition. In the thesis we move a step further, suggesting that end users should take an active part in the trade on power markets such as (i) day-ahead markets and (ii) markets handling close to real-time balancing of power grids. Our ideas and results can be utilised (a) to increase the efficiency of these markets and (b) to handle strained situations when power systems operate at their limits. For this we utilise information and communication technology available today and develop electronic market mechanisms designed for large numbers of participants, typically distributed over a power grid.

    The papers of the thesis cover resource allocation with separable objective functions, a market mechanism that accepts actors with discontinuous demand, and mechanisms that allow actors to express combinatorial dependencies between traded commodities on multi-commodity markets. Further we present results from field tests and simulations.

    List of papers
    1. Resource Allocation With Wobbly Functions
    2002. In: Computational Optimization and Applications, ISSN 0926-6003, Vol. 23, no 2, p. 171-200. Article in journal (Refereed). Published.
    Identifiers
    urn:nbn:se:uu:diva-92381 (URN)
    Available from: 2004-11-22 Created: 2004-11-22. Bibliographically approved
    2. Extending Equilibrium Markets
    2001. In: IEEE Intelligent Systems, ISSN 1094-7167, Vol. 16, no 4, p. 18-26. Article in journal (Refereed). Published.
    Identifiers
    urn:nbn:se:uu:diva-92382 (URN)
    Available from: 2004-11-22 Created: 2004-11-22. Bibliographically approved
    3. Communication Test of Electronic Power Markets through Power Line Communication
    Chapter in book (Other academic) Published
    Identifiers
    urn:nbn:se:uu:diva-92383 (URN)
    Available from: 2004-11-22 Created: 2004-11-22. Bibliographically approved
    4. A Tractable Mechanism for Time Dependent Markets
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:uu:diva-92384 (URN)
    Available from: 2004-11-22 Created: 2004-11-22 Last updated: 2010-01-13. Bibliographically approved
    5. A Flexible Model for Tree-Structured Multi-Commodity Markets
    2007 (English). In: Electronic Commerce Research, ISSN 1389-5753, E-ISSN 1572-9362, Vol. 7, no 1, p. 69-88. Article in journal (Refereed). Published.
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-92385 (URN), 10.1007/s10660-006-0063-Y (DOI)
    Available from: 2004-11-22 Created: 2004-11-22 Last updated: 2018-01-13. Bibliographically approved
    6. Market Simulations
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:uu:diva-92386 (URN)
    Available from: 2004-11-22 Created: 2004-11-22 Last updated: 2010-01-13. Bibliographically approved
  • 17.
    Carlsson, Per
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Market and resource allocation algorithms with application to energy control, 2001. Licentiate thesis, monograph (Other scientific).
    Abstract [en]

    The energy markets of today are markets with rather few active participants. The participants are, with few exceptions, large producers and distributors. The market mechanisms that are used are constructed with this kind of a market situation in mind. With an automatic or semiautomatic approach, the market mechanism would be able to incorporate a larger number of participants. Smaller producers, and even consumers, could take an active part in the market. The gain is in more efficient markets, and – due to smaller fluctuations in demand – better resource usage from an environmental perspective.

    The energy markets of the Nordic countries (as well as many others) were deregulated during the last few years. The change has been radical and the situation is still rather new. We believe that the market can be made more efficient with the help of the dynamics of the small actors.

    The idealised world of economic theory often relies on assumptions such as continuous demand and supply curves. These assumptions are useful, and they do not introduce problems in the power market situation of today, with relatively few, large participants. When consumers and small producers are introduced on the market, the situation is different. Then it is a drawback if the market mechanisms cannot handle discontinuous supply and demand.

    The growth in accessibility to computational power and data communications that we have experienced in the last years (and are experiencing) could be utilised when constructing mechanisms for the energy markets of tomorrow.

    In this thesis we suggest a new market mechanism, ConFAst, that utilises this technological progress to make it possible to incorporate a large number of active participants on the market. The mechanism does not rely on the assumptions above. The gain is a more efficient market with smaller fluctuations in demand over the day.

    To make this possible there is a need for efficient algorithms; in particular, this mechanism relies on an efficient aggregation algorithm. An algorithm for aggregation of objective functions is part of this thesis. The algorithm handles maximisation with nonconcave, even noisy, objective functions. Experimental results show that the approach, in practically relevant cases, is significantly faster than the standard algorithm.
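
    What "aggregation of objective functions" means can be illustrated on a discrete allocation grid: the aggregate of two participants' objective functions assigns to each total amount the value of the best split between them (a max-plus convolution), which is well defined even for nonconcave functions. The Python sketch below is only a quadratic-time baseline for this definition, not the faster algorithm developed in the thesis, and the toy functions are invented.

        # Illustrative sketch: aggregate two objective functions on a discrete grid.
        def aggregate(f, g, capacity):
            """f, g: lists where f[s] is the value of allocating s units; returns h."""
            h, split = [], []
            for t in range(capacity + 1):
                best_s = max(range(t + 1), key=lambda s: f[s] + g[t - s])
                h.append(f[best_s] + g[t - best_s])
                split.append(best_s)
            return h, split

        # Two toy, deliberately nonconcave objective functions over 0..5 units.
        f = [0, 1, 1, 5, 5, 6]
        g = [0, 0, 4, 4, 4, 7]
        h, split = aggregate(f, g, capacity=5)
        print("aggregate:", h)
        print("optimal split of each total between f and g:", split)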

  • 18.
    Castegren, Elias
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Capability-Based Type Systems for Concurrency Control, 2018. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Since the early 2000s, in order to keep up with the performance predictions of Moore's law, hardware vendors have had to turn to multi-core computers. Today, parallel hardware is everywhere, from massive server halls to the phones in our pockets. However, this parallelism does not come for free. Programs must explicitly be written to allow for concurrent execution, which adds complexity that is not present in sequential programs. In particular, if two concurrent processes share the same memory, care must be taken so that they do not overwrite each other's data. This issue of data-races is exacerbated in object-oriented languages, where shared memory in the form of aliasing is ubiquitous. Unfortunately, most mainstream programming languages were designed with sequential programming in mind, and therefore provide little or no support for handling this complexity. Even though programming abstractions like locks can be used to synchronise accesses to shared memory, the burden of using these abstractions correctly and efficiently is left to the programmer.

    The contribution of this thesis is programming language technology for controlling concurrency in the presence of shared memory. It is based on the concept of reference capabilities, which facilitate safe concurrent programming by restricting how memory may be accessed and shared. Reference capabilities can be used to enforce correct synchronisation when accessing shared memory, as well as to prevent unsafe sharing when using more fine-grained concurrency control, such as lock-free programming. This thesis presents the design of a capability-based type system with low annotation overhead, that can statically guarantee the absence of data-races without giving up object-oriented features like aliasing, subtyping and code reuse. The type system is formally proven safe, and has been implemented for the highly concurrent object-oriented programming language Encore.
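
    Reference capabilities are enforced statically by the type system described above; Python cannot express that, but the underlying discipline (only the current owner of an object may touch it, and ownership must be handed over explicitly) can be mimicked with a run-time check. The sketch below is a loose dynamic analogy, not Encore or the thesis type system.

        # Dynamic analogy only: ownership is checked at run time, not by a type system.
        import threading

        class OwnedHandle:
            def __init__(self, value):
                self._value = value
                self._owner = threading.get_ident()      # creator starts as owner

            def _check(self):
                if threading.get_ident() != self._owner:
                    raise PermissionError("access by a thread that does not own the handle")

            def get(self):
                self._check()
                return self._value

            def set(self, value):
                self._check()
                self._value = value

            def transfer_to(self, thread_id):
                self._check()                            # only the owner may give it away
                self._owner = thread_id

        handle = OwnedHandle([1, 2, 3])
        handle.set([1, 2, 3, 4])                         # fine: we are the owner

        def worker():
            try:
                handle.get()                             # rejected: ownership not transferred
            except PermissionError as e:
                print("worker:", e)

        t = threading.Thread(target=worker)
        t.start(); t.join()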

    List of papers
    1. Reference Capabilities for Trait Based Reuse and Concurrency Control
    2016 (English). Report (Other academic).
    Abstract [en]

    The proliferation of shared mutable state in object-oriented programming complicates software development as two seemingly unrelated operations may interact via an alias and produce unexpected results. In concurrent programming this manifests itself as data-races.

    Concurrent object-oriented programming further suffers from the fact that code that warrants synchronisation cannot easily be distinguished from code that does not. The burden is placed solely on the programmer to reason about alias freedom, sharing across threads and side-effects to deduce where and when to apply concurrency control, without inadvertently blocking parallelism.

    This paper presents a reference capability approach to concurrent and parallel object-oriented programming where all uses of aliases are guaranteed to be data-race free. The static type of an alias describes its possible sharing without using explicit ownership or effect annotations. Type information can express non-interfering deterministic parallelism without dynamic concurrency control, thread-locality, lock-based schemes, and guarded-by relations giving multi-object atomicity to nested data structures. Unification of capabilities and traits allows trait-based reuse across multiple concurrency scenarios with minimal code duplication. The resulting system brings together features from a wide range of prior work in a unified way.

    Series
    Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2016-007
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-309774 (URN)
    Projects
    Structured Aliasing, Upscale, UPMARC
    Available from: 2016-12-07 Created: 2016-12-07 Last updated: 2018-01-13. Bibliographically approved
    2. Kappa: Insights, Current Status and Future Work
    2016 (English). Conference paper, Oral presentation with published abstract (Refereed).
    Abstract [en]

    KAPPA is a type system for safe concurrent object-oriented programming using reference capabilities. It uses a combination of static and dynamic techniques to guarantee data-race freedom, and, for a certain subset of the system, non-interference (and thereby deterministic parallelism). It combines many features from previous work on alias management, such as substructural types, regions, ownership types, and fractional permissions, and brings them together using a unified set of primitives.

    In this extended abstract we show how KAPPA’s capabilities express variations of the aforementioned concepts, discuss the main insights from working with KAPPA, present the current status of the implementation of KAPPA in the context of the actor language Encore, and discuss ongoing and future work. 

    Keywords
    Type systems, Language Implementation, Capabilities, Traits, Concurrency, Object-Oriented
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-307137 (URN)
    Conference
    IWACO
    Projects
    Structured Aliasing, Upscale, UPMARC
    Funder
    EU, FP7, Seventh Framework Programme, FP7-612985
    Available from: 2016-12-07. Created: 2016-11-09. Last updated: 2018-01-13. Bibliographically approved.
    3. Types for CAS: Relaxed Linearity with Ownership Transfer
    2017 (English), Article in journal (Other academic), Submitted
    Abstract [en]

    Linear references are guaranteed to be free from aliases. This is a strong property that simplifies reasoning about programs and enables powerful optimisations, but it is also a property that is too strong for many applications. Notably, lock-free algorithms, which implement protocols that ensure safe, non-blocking concurrent access to data structures, are generally not typable with linear references because they rely on aliasing to achieve lock-freedom.

    This paper presents LOLCAT, a type system with a relaxed notion of linearity that allows an unbounded number of aliases to an object as long as at most one alias at a time owns the right to access the contents of the object. This ownership can be transferred between aliases, but can never be duplicated. LOLCAT types are powerful enough to type several lock-free data structures and give a compile-time guarantee of absence of data-races when accessing owned data. In particular, LOLCAT is able to assign types to the CAS (compare and swap) primitive that precisely describe how ownership is transferred across aliases, possibly across different threads.

    The paper introduces LOLCAT through a sound core procedural calculus, and shows how LOLCAT can be applied to three fundamental lock-free data structures. It also shows how LOLCAT can be used to implement synchronisation primitives like locks, and discusses a prototype implementation which integrates LOLCAT with an object-oriented programming language.
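    The calculus and the prototype are described in the paper itself; purely as an illustration of the CAS primitive and of using it to build a synchronisation primitive, the sketch below implements a minimal spinlock in Rust. The `SpinLock` type and its methods are our own illustrative names, not part of LOLCAT or its implementation.

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

/// A minimal spinlock built directly on compare-and-swap (CAS).
/// The AtomicBool records whether some thread currently owns the data.
struct SpinLock<T> {
    locked: AtomicBool,
    data: UnsafeCell<T>,
}

// Safety: the CAS protocol below ensures at most one thread accesses `data`
// at a time, so sharing the lock between threads is sound.
unsafe impl<T: Send> Sync for SpinLock<T> {}

impl<T> SpinLock<T> {
    fn new(value: T) -> Self {
        SpinLock { locked: AtomicBool::new(false), data: UnsafeCell::new(value) }
    }

    /// Acquire exclusive access, run `f` on the data, then release.
    fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        // CAS: succeed only if we observe `false` (unlocked) and atomically
        // flip it to `true` before any other thread does. Winning the CAS is,
        // conceptually, acquiring ownership of the protected data.
        while self
            .locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
        let result = f(unsafe { &mut *self.data.get() });
        // Releasing the lock hands ownership to whoever wins the next CAS.
        self.locked.store(false, Ordering::Release);
        result
    }
}

fn main() {
    let counter = SpinLock::new(0u64);
    thread::scope(|s| {
        for _ in 0..4 {
            s.spawn(|| {
                for _ in 0..1_000 {
                    counter.with(|n| *n += 1);
                }
            });
        }
    });
    assert_eq!(counter.with(|n| *n), 4_000);
}
```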

    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-336019 (URN)
    Available from: 2017-12-11 Created: 2017-12-11 Last updated: 2018-01-13
    4. Bestow and Atomic: Concurrent programming using isolation, delegation and grouping
    2018 (English), In: The Journal of Logical and Algebraic Methods in Programming, ISSN 2352-2208, E-ISSN 2352-2216, Vol. 100, p. 130-151, Article in journal (Refereed), Published
    Abstract [en]

    Any non-trivial concurrent system warrants synchronisation, regardless of the concurrency model. Actor-based concurrency serialises all computations in an actor through asynchronous message passing. In contrast, lock-based concurrency serialises some computations by following a lock-unlock protocol for accessing certain data. Both systems require sound reasoning about pointers and aliasing to exclude data-races. If actor isolation is broken, so is the single-thread-of-control abstraction. Similarly for locks, if a datum is accessible outside of the scope of the lock, the datum is not governed by the lock. In this paper we discuss how to balance aliasing and synchronisation. In previous work, we defined a type system that guarantees data-race freedom of actor-based concurrency and lock-based concurrency. This paper extends that work with two programming constructs: one for decoupling isolation and synchronisation, and one for constructing higher-level atomicity guarantees from lower-level synchronisation. We focus predominantly on actors, and in particular the Encore programming language, but our ultimate goal is to define our constructs in such a way that they can be used both with locks and actors, given that combinations of both models occur frequently in actual systems. We discuss the design space, provide several formalisations of different semantics and discuss their properties, and connect them to case studies showing how our proposed constructs can be useful. We also report on an ongoing implementation of our proposed constructs in Encore.
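    The Bestow and Atomic constructs themselves are defined in the paper. As a loose, hand-rolled illustration of the underlying idea of isolation plus delegation (one thread of control exclusively owns a datum and other threads delegate operations to it as messages), consider the following Rust sketch; all names are ours, not Encore's or the paper's.

```rust
use std::sync::mpsc;
use std::thread;

// A message is a closure to be run against the actor's private state.
type Msg = Box<dyn FnOnce(&mut Vec<String>) + Send>;

/// Spawn an "actor" that exclusively owns a Vec<String>. Other threads never
/// touch the vector directly; they delegate operations by sending messages,
/// and the actor's single thread of control serialises them.
fn spawn_actor() -> (mpsc::Sender<Msg>, thread::JoinHandle<Vec<String>>) {
    let (tx, rx) = mpsc::channel::<Msg>();
    let handle = thread::spawn(move || {
        let mut log: Vec<String> = Vec::new(); // isolated state
        for msg in rx {
            msg(&mut log); // processed one at a time: no data races possible
        }
        log
    });
    (tx, handle)
}

fn main() {
    let (actor, handle) = spawn_actor();

    // Two clients delegate work instead of sharing the vector directly.
    let clients: Vec<_> = (0..2)
        .map(|id| {
            let actor = actor.clone();
            thread::spawn(move || {
                actor
                    .send(Box::new(move |log: &mut Vec<String>| {
                        log.push(format!("client {id}"))
                    }))
                    .unwrap()
            })
        })
        .collect();
    clients.into_iter().for_each(|c| c.join().unwrap());

    // Dropping the last sender closes the mailbox; the actor drains it and
    // returns its state.
    drop(actor);
    let log = handle.join().unwrap();
    assert_eq!(log.len(), 2);
}
```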

    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-336020 (URN); 10.1016/j.jlamp.2018.06.007 (DOI); 000444363000008 ()
    Projects
    UPMARC
    Funder
    EU, FP7, Seventh Framework Programme, 612985; Swedish Research Council, 2012-4967
    Available from: 2018-06-30. Created: 2017-12-11. Last updated: 2018-11-14. Bibliographically approved.
    5. OOlong: An Extensible Concurrent Object Calculus
    2018 (English), Conference paper, Published paper (Refereed)
    Abstract [en]

    We present OOlong, an object calculus with interface inheritance, structured concurrency and locks. The goal of the calculus is extensibility and reuse: the semantics are therefore available both in a version for LaTeX typesetting (written in Ott) and in a mechanised version for doing rigorous proofs in Coq.

    Keywords
    Object Calculi, Semantics, Mechanisation, Concurrency
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-335174 (URN); 10.1145/3167132.3167243 (DOI)
    Conference
    Proceedings of SAC 2018: Symposium on Applied Computing, Pau, France, April 9–13, 2018 (SAC 2018)
    Available from: 2017-12-01 Created: 2017-12-01 Last updated: 2018-01-13
  • 19.
    Dahlin, Niklas
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Optimering av takttidtabell för järnvägstrafik på en regional nivå: En fallstudie av fyra Mälarstäder [Optimisation of a cyclic timetable for railway traffic at a regional level: a case study of four Mälaren cities], 2018, Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    Effective commuting is an important part of regional development and attractiveness, and railway traffic is a favourable mode of transportation since it is energy efficient and environmentally friendly. Getting more people to choose the train for their commute is therefore desirable. One concept aiming to increase the use of railway traffic is the cyclic timetable. At present the concept is most frequently used on a national level, but there are possibilities to implement the idea on a more regional level.

    The purpose of this thesis is to study if and how a cyclic timetable for railway traffic can be constructed and optimised for a region, more specifically the region “Fyra Mälarstäder”. Challenges and opportunities to implement this type of timetable on a regional level are also discussed.

    When constructing a railway timetable, several infrastructural limitations must be taken into account; single-track sections, for example, severely limit capacity. To construct and optimise the timetable, these limitations were therefore formulated mathematically, together with a number of other criteria, as constraints of an optimisation problem. The objective function consisted of a weighted sum of trip times within the system, which was then minimised (a simplified sketch of such an objective is given below).
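    The abstract summarises the model only in words; purely as an illustrative sketch, an objective of this kind could be written as follows. The notation here is assumed for illustration and is not taken from the thesis.

```latex
% T: set of trips in the system, w_t: weight assigned to trip t,
% arr_t and dep_t: arrival and departure times chosen by the model.
\min \; \sum_{t \in T} w_t \,\bigl(\mathrm{arr}_t - \mathrm{dep}_t\bigr)
\qquad \text{subject to single-track meeting, headway and cyclicity constraints.}
```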

    The results indicate that a cyclic timetable could successfully be used for "Fyra Mälarstäder". However, some aspects remain to be investigated, including how train lines continue beyond the system boundaries of the study. As for the optimisation, the weighting of the objective function appears to play a considerable role in obtaining a satisfactory timetable, and varying and adjusting certain parameters may further improve the result.

  • 20.
    Edqvist, Samuel
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Scheduling Physicians using Constraint Programming, 2008, Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits, Student thesis
  • 21.
    Elvander, Adam
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Developing a Recommender System for a Mobile E-commerce Application, 2015, Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    This thesis describes the process of conceptualizing and developing a recommender system for a peer-to-peer commerce application. The application in question is called Plick and is a vintage clothes marketplace where private persons and smaller vintage retailers buy and sell secondhand clothes from each other. Recommender systems are a relatively young field of research but have become more popular in recent years with the advent of big data applications such as Netflix and Amazon. Examples of recommender systems being used in e-marketplace applications are, however, still sparse, and the main contribution of this thesis is insight into this sub-problem in recommender system research. The three main families of recommender algorithms are analyzed and two of them are deemed unfitting for the e-marketplace scenario. Out of the third family, collaborative filtering, three algorithms are described, implemented and tested on a large subset of data collected in Plick, consisting mainly of clicks made by users on items in the system. By using both traditional and novel evaluation techniques it is further shown that a user-based collaborative filtering algorithm yields the most accurate recommendations when compared to actual user behavior. This represents a divergence from recommender systems commonly used in e-commerce applications. The paper concludes with a discussion of the cause and significance of this difference and the impact of certain data-preprocessing techniques on the results.
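    The thesis's own implementation and data are not reproduced here; purely to illustrate the user-based collaborative filtering idea the abstract refers to, the Rust sketch below scores items using Jaccard similarity over per-user click sets. All names and the toy data are hypothetical.

```rust
use std::collections::{HashMap, HashSet};

/// Jaccard similarity between two users' sets of clicked items.
fn similarity<'a>(a: &HashSet<&'a str>, b: &HashSet<&'a str>) -> f64 {
    let intersection = a.intersection(b).count() as f64;
    let union = a.union(b).count() as f64;
    if union == 0.0 { 0.0 } else { intersection / union }
}

/// User-based collaborative filtering: rank items clicked by users similar
/// to `target` that `target` has not clicked yet.
fn recommend<'a>(
    clicks: &'a HashMap<&'a str, HashSet<&'a str>>,
    target: &str,
    top_n: usize,
) -> Vec<&'a str> {
    let mine = &clicks[target];
    let mut scores: HashMap<&'a str, f64> = HashMap::new();
    for (user, theirs) in clicks {
        if *user == target {
            continue;
        }
        // Each similar user "votes" for the items the target has not seen,
        // weighted by how similar their click histories are.
        let sim = similarity(mine, theirs);
        for item in theirs.difference(mine) {
            *scores.entry(*item).or_insert(0.0) += sim;
        }
    }
    let mut ranked: Vec<_> = scores.into_iter().collect();
    ranked.sort_by(|x, y| y.1.partial_cmp(&x.1).unwrap());
    ranked.into_iter().take(top_n).map(|(item, _)| item).collect()
}

fn main() {
    // Toy click log: user -> set of clicked items.
    let clicks: HashMap<&str, HashSet<&str>> = HashMap::from([
        ("alice", HashSet::from(["denim_jacket", "boots"])),
        ("bob", HashSet::from(["denim_jacket", "boots", "scarf"])),
        ("carol", HashSet::from(["sunglasses"])),
    ]);

    // Bob's clicks overlap most with Alice's, so his extra item ranks first.
    assert_eq!(recommend(&clicks, "alice", 1), vec!["scarf"]);
}
```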

  • 22.
    Engvall, Maja
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.