Publications from Uppsala University
1-50 of 140
  • 1.
    Abdulla, Parosh
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Atig, Mohamed Faouzi
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Jonsson, Bengt
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Lång, Magnus
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Ngo, Tuan-Phong
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Sagonas, Konstantinos
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Optimal stateless model checking for reads-from equivalence under sequential consistency. 2019. In: Proceedings of the ACM on Programming Languages, E-ISSN 2475-1421. Article in journal (Refereed)
    Abstract [en]

    We present a new approach for stateless model checking (SMC) of multithreaded programs under Sequential Consistency (SC) semantics. To combat state-space explosion, SMC is often equipped with a partial-order reduction technique, which defines an equivalence on executions and only needs to explore one execution in each equivalence class. Recently, it has been observed that the commonly used equivalence of Mazurkiewicz traces can be coarsened while still covering all program crashes and assertion violations. However, for this coarser equivalence, which preserves only the reads-from relation from writes to reads, there has been no SMC algorithm that is (i) optimal, in the sense that it explores precisely one execution in each reads-from equivalence class, and (ii) efficient, in the sense that it spends polynomial effort per class. We present the first SMC algorithm for SC that is both optimal and efficient in practice, meaning that it spends polynomial time per equivalence class on all programs that we have tried. This is achieved by a novel test that checks whether a given reads-from relation can arise in some execution. Our experimental results show that Nidhugg/rfsc, although slower than the fastest SMC tools in programs where the tools happen to examine the same number of executions, always scales similarly to or better than they do, and outperforms them by an exponential factor in programs where the reads-from equivalence is coarser than the standard one. We also present two non-trivial use cases where the new equivalence is particularly effective, as well as the significant performance advantage that Nidhugg/rfsc offers compared to state-of-the-art SMC and systematic concurrency testing tools.
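    The coarsening from Mazurkiewicz traces to reads-from equivalence can be illustrated with a toy program that is not from the paper: two threads each write a variable x and a third thread reads it. All three events conflict pairwise, so every interleaving is its own Mazurkiewicz class, yet only three reads-from classes exist.

```python
from itertools import permutations

# Toy program: thread 1 writes x, thread 2 writes x, thread 3 reads x.
events = ["w1", "w2", "r"]

def reads_from(order):
    """The write that the read observes (None means the initial value)."""
    last_write = None
    for e in order:
        if e == "r":
            return last_write
        last_write = e

interleavings = list(permutations(events))
classes = {reads_from(o) for o in interleavings}

print(len(interleavings))       # 6: all events conflict, so 6 Mazurkiewicz classes
print(sorted(classes, key=str))  # [None, 'w1', 'w2']: only 3 reads-from classes
```

    An optimal reads-from-based SMC algorithm would explore 3 executions here instead of 6; the gap grows exponentially with more writers.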

  • 2. Abelson, Harold
    et al.
    Sussman, Gerald Jay
    Henz, Martin
    Wrigstad, Tobias
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Structure and Interpretation of Computer Programs: JavaScript Edition. 2022. Book (Other academic)
  • 3.
    Adwent, Ann-Kristin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Shared IT Service Center i kommuner [Shared IT Service Centers in municipalities]. 2012. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    To be able to maintain an adequate IT service for various users and needs, more municipalities are looking at the possibility of merging local IT departments. One solution for merging multiple services/functions and creating synergy opportunities is called a Shared Service Center (SSC). The concept of an SSC is that the administrative service itself becomes a core activity within the organization. The aim of this thesis is to provide municipalities that are considering a merger of their local IT departments with recommendations on how to design the Shared IT Service Center. The recommendations are outlined based on an analysis of IT-management literature and reports, and by conducting a study of three ongoing collaborations.

    The conclusions drawn from the study suggest the following guidelines for the design of a Shared IT Service Center for municipalities:

    • Create a vision associated with a specific and structured target state.
    • Identify the needs of different target groups in the municipalities and set a common standard.
    • Create a clear and practical model/SLA for the form of the cooperation and agreement.
    • Ensure that the individual municipalities retain their commissioning skills, so that these are not lost in the context of a common IT operation.
    • Find an organizational form that is democratic for the member municipalities and facilitates a long-term partnership.
    • Specify operation and maintenance so that they can be regulated and controlled.
    • Establish a common help desk.
    • Establish a common standard and consolidated infrastructure before introducing a common technology platform.

  • 4.
    Ahani, Ghafour
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Optimal Scheduling for Timely Information in Communication Systems. 2021. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The explosive growth of data in the information society poses significant challenges for the timely delivery of information in communication networks. Hence, optimal utilization of scarce network resources is crucial. This dissertation contributes to several aspects of the timely delivery of information, including the scheduling of data flows between sources and destinations in a network, the scheduling of content caching in a base station of mobile networks, and the scheduling of information collection. Two important metrics, namely delivery deadline and information freshness, are accounted for. Mathematical models and tailored solution approaches are developed using tools from optimization.

    Five research papers are included in the dissertation. Paper I studies a flow routing and scheduling problem with delivery deadlines. This type of problem arises in many applications, such as data exchange in scientific projects or data replication in data centers, where large amounts of data need to be distributed across the globe in a timely manner. Papers II, III, and IV investigate content caching over time in a base station. Content caching at the network's edge has recently been considered a cost-efficient way of providing users with their requested information. In Paper II, the schedule for updating the cache is optimized with respect to the content requests of users and the popularity of contents over time. Paper III, as an extension of Paper II, addresses the question of how to keep the cached information fresh, as not all contents can be updated due to the limited capacity of the backhaul link. The freshness of information is quantified via the notion of age of information (AoI). Paper IV investigates joint optimization of content caching and recommendation; the latter contributes to satisfying content requests in case of a cache miss. Paper V studies optimal scheduling of information collection from a set of sensor nodes via an unmanned aerial vehicle. The objective is to keep the overall AoI as small as possible.

    In these studies, analysis of problem complexity is provided, and time-efficient solution algorithms based on column generation, Lagrangian decomposition, and graph labeling are developed. The algorithms also yield a bound on the global optimum that can be used to assess the performance of any given solution. The effectiveness of the algorithms in obtaining near-optimal solutions is demonstrated via extensive simulations.
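    Several of the included papers quantify freshness via age of information (AoI). As a rough illustration (conventions invented here, not taken from the thesis), in a time-slotted system the age of a cached content grows by one per slot and resets when the content is updated:

```python
def aoi_trace(horizon, update_slots):
    """Age of information per slot: resets to 0 at an update, else grows by 1.
    (Illustrative convention; papers differ on whether age resets to 0 or 1.)"""
    ages, age = [], 0
    for t in range(horizon):
        age = 0 if t in update_slots else age + 1
        ages.append(age)
    return ages

trace = aoi_trace(8, update_slots={0, 4})
print(trace)                    # [0, 1, 2, 3, 0, 1, 2, 3]
print(sum(trace) / len(trace))  # average AoI = 1.5
```

    A scheduler then chooses the update slots, subject to capacity, so as to minimize a cost that is monotonically increasing in these ages.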

    List of papers
    1. Routing and scheduling of network flows with deadlines and discrete capacity allocation
    2020 (English). In: Networks, ISSN 0028-3045, E-ISSN 1097-0037, Vol. 76, no. 1, p. 54-74. Article in journal (Refereed). Published
    Abstract [en]

    Joint scheduling and routing of data flows with deadline constraints in communication networks has been attracting research interest. This type of problem is distinguished from conventional multicommodity flows by the presence of the time dimension. In this paper, we address a flow routing and scheduling problem with delivery deadlines, where the assignment of link capacity occurs in discrete units. Discrete capacity allocation is motivated by applications in communication systems, where it is common to have a base unit of capacity (e.g., a wavelength channel in optical communications). We present and prove complexity results for the problem. Next, we give an optimization formulation based on a time slicing approach, which discretizes time into slices so that the deadline constraints can be formulated. We then derive an effective reformulation of the problem, via which a column generation algorithm is developed. In addition, we propose a simple and fast max-flow-based algorithm. We use a number of networks and traffic scenarios to study various performance aspects of the algorithms.
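    The time-slicing idea can be sketched with a toy feasibility check (illustrative only, not the paper's algorithm; the node names, demand, and capacities below are invented): expand the network over time slices and run an ordinary max-flow computation to see whether a demand can be delivered before its deadline.

```python
from collections import deque, defaultdict

def max_flow(edges, s, t):
    """Edmonds-Karp max flow. edges: list of (u, v, capacity) arcs."""
    cap = defaultdict(lambda: defaultdict(int))
    for u, v, c in edges:
        cap[u][v] += c
        cap[v][u] += 0  # create the reverse (residual) arc
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v, c in cap[u].items():
                if v not in parent and c > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total
        # bottleneck capacity along the path, then augment
        aug, v = float("inf"), t
        while parent[v] is not None:
            aug = min(aug, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= aug
            cap[v][u] += aug
            v = u
        total += aug

def deadline_edges(slices, demand, per_slice):
    """Time-expanded graph: one copy of link A->B per time slice."""
    edges = []
    for ts in range(slices):
        edges.append(("src", ("A", ts), demand))         # data available at A
        edges.append((("A", ts), ("B", ts), per_slice))  # discrete link units
        edges.append((("B", ts), "sink", demand))        # delivered in time
    return edges

# 5 units, link carries 2 units per slice: feasible within 3 slices, not 2.
print(max_flow(deadline_edges(3, 5, 2), "src", "sink") >= 5)  # True
print(max_flow(deadline_edges(2, 5, 2), "src", "sink") >= 5)  # False
```

    The paper's setting is richer (routing over a whole network, discrete allocation decisions), but the same time-expansion underlies both the formulation and the max-flow-based heuristic.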

    Place, publisher, year, edition, pages
    WILEY, 2020
    Keywords
    deadline, discrete capacity allocation, flow routing, flow scheduling, networks
    National Category
    Communication Systems
    Identifiers
    urn:nbn:se:uu:diva-418562 (URN), 10.1002/net.21939 (DOI), 000538309900003 ()
    Available from: 2020-09-03. Created: 2020-09-03. Last updated: 2021-10-22. Bibliographically approved
    2. Optimal scheduling of content caching subject to deadline
    2020 (English). In: IEEE Open Journal of the Communications Society, E-ISSN 2644-125X, Vol. 1, p. 293-307. Article in journal (Refereed). Published
    Abstract [en]

    Content caching at the edge of the network is a promising technique to alleviate the burden on backhaul networks. In this paper, we consider content caching over time in a base station with limited cache capacity. As the popularity of contents may vary over time, the contents of the cache need to be updated accordingly. In addition, a requested content may have a delivery deadline within which the content needs to be obtained. Motivated by these observations, we address optimal scheduling of content caching in a time-slotted system under delivery deadline and cache capacity constraints. The objective is to minimize a cost function that captures the load of the backhaul links. For our optimization problem, we prove its NP-hardness via a reduction from the Partition problem. For problem solving, via a mathematical reformulation, we develop a solution approach based on repeatedly applying a column generation algorithm and a problem-tailored rounding algorithm. In addition, two greedy algorithms are developed based on existing algorithms from the literature. Finally, we present extensive simulations that verify the effectiveness of our solution approach in obtaining near-optimal solutions in comparison to the greedy algorithms. The solutions obtained from our solution approach are within 1.6% of global optimality.

    Place, publisher, year, edition, pages
    IEEE, 2020
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-423414 (URN), 10.1109/OJCOMS.2020.2978585 (DOI), 000723372400020 ()
    Funder
    Swedish Research Council
    Available from: 2020-10-24. Created: 2020-10-24. Last updated: 2022-04-27. Bibliographically approved
    3. Optimal Scheduling of Age-centric Caching: Tractability and Computation
    2022 (English). In: IEEE Transactions on Mobile Computing, ISSN 1536-1233, E-ISSN 1558-0660, Vol. 21, p. 2939-2954. Article in journal (Refereed). Published
    Abstract [en]

    The notion of age of information (AoI) has become an important performance metric in network and control systems. Information freshness, represented by AoI, naturally arises in the context of caching. We address optimal scheduling of cache updates for a time-slotted system where the contents vary in size. The capacity of the cache for making updates is limited. Each content is associated with a utility function that depends on the AoI and the time duration of absence from the cache. For this combinatorial optimization problem, we present the following contributions. First, we provide theoretical results on problem tractability. Whereas the problem is NP-hard in general, we prove tractability in polynomial time for a special case with uniform content size, by a reformulation using network flows. Second, we derive an integer linear formulation for the problem, by which the optimal solution can be obtained for small-scale scenarios. Next, via a mathematical reformulation, we derive a scalable optimization algorithm using repeated column generation. In addition, the algorithm computes a bound on the global optimum that can be used to assess the performance of any scheduling solution. Performance evaluation of large-scale scenarios demonstrates the strengths of the algorithm in comparison to a greedy schedule. Finally, we extend the applicability of our work to cyclic scheduling.

    Place, publisher, year, edition, pages
    IEEE, 2022
    Keywords
    Computer Networks and Communications, age of information, scheduling, optimization
    National Category
    Communication Systems; Telecommunications
    Identifiers
    urn:nbn:se:uu:diva-456270 (URN), 10.1109/tmc.2020.3045104 (DOI), 000819824200019 ()
    Available from: 2021-10-17. Created: 2021-10-17. Last updated: 2022-08-04. Bibliographically approved
    4. Optimal Scheduling of Content Caching with Recommendation and Freshness of Information
    (English). Article in journal (Refereed). Submitted
    Abstract [en]

    Content caching at the network edge has been considered an effective way of mitigating backhaul load and improving user experience. Caching efficiency can be enhanced by content recommendation and by keeping the information as fresh as possible. To the best of our knowledge, there is no work that jointly takes into account both of these aspects in content caching. With content recommendation, a requested content that is not in the cache can alternatively be satisfied by a related cached content recommended by the system. Information freshness, arising naturally in content caching, can be quantified by age of information (AoI). We address optimal scheduling of cache updates for a time-slotted system accounting for both content recommendation and AoI, subject to cache capacity as well as backhaul capacity for updating the cache. For each content, there are requests that need to be satisfied, and there is a cost function capturing the freshness of information which is monotonically increasing in the AoI. We present the following contributions. First, we prove that the problem is NP-hard even with contents of uniform size. Second, we derive an integer linear formulation for the problem, by which the optimal solution can be obtained for small-scale scenarios. Third, we develop an algorithm based on Lagrangian decomposition. Fourth, we develop efficient algorithms for solving the resulting subproblems. In addition, our algorithm computes a bound that can be used to evaluate the performance of any suboptimal solution. Finally, we conduct simulations to show the effectiveness of our algorithm in comparison to a greedy schedule.

    National Category
    Communication Systems
    Identifiers
    urn:nbn:se:uu:diva-456272 (URN)
    Available from: 2021-10-17 Created: 2021-10-17 Last updated: 2021-10-22
    5. Age-Optimal UAV Scheduling for Data Collection With Battery Recharging
    2021 (English). In: IEEE Communications Letters, ISSN 1089-7798, E-ISSN 1558-2558, Vol. 25, no. 4, p. 1254-1258. Article in journal (Refereed). Published
    Abstract [en]

    We study route scheduling of a UAV for data collection from sensor nodes (SNs) with battery recharging. The freshness of the collected information is captured by the metric of age of information (AoI). The objective is to minimize the average AoI cost of all SNs over a scheduling time horizon. We prove that the problem in its general form is NP-hard. Then, for a special case of the problem, we prove that the optimum can be computed in polynomial time. Next, we develop an algorithm based on graph labeling. Finally, we show the effectiveness of our algorithm in comparison to greedy scheduling.

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2021
    Keywords
    Batteries, Data collection, Measurement, Heuristic algorithms, Cost function, Topology, Processor scheduling, Age of information, path planning, scheduling, UAV
    National Category
    Communication Systems
    Identifiers
    urn:nbn:se:uu:diva-443450 (URN), 10.1109/LCOMM.2020.3047909 (DOI), 000639708000044 ()
    Available from: 2021-05-31. Created: 2021-05-31. Last updated: 2023-05-24. Bibliographically approved
  • 5.
    Ahani, Ghafour
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Yuan, Di
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Accounting for Information Freshness in Scheduling of Content Caching. 2020. In: ICC 2020: 2020 IEEE International Conference on Communications (ICC), 2020. Conference paper (Refereed)
    Abstract [en]

    In this paper, we study the problem of optimal scheduling of content placement over time in a base station with limited cache capacity, taking into account jointly the offloading effect and the freshness of information. We model offloading based on popularity in terms of the number of requests, and information freshness based on the notion of age of information (AoI). The objective is to reduce the load of backhaul links as well as the AoI of contents in the cache via a joint cost function. For the resulting optimization problem, we prove its hardness via a reduction from the Partition problem. Next, via a mathematical reformulation, we derive a solution approach based on column generation and a tailored rounding mechanism. Finally, we provide performance evaluation results showing that our algorithm provides near-optimal solutions.

  • 6.
    Ahani, Ghafour
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Komar Univ Sci & Technol, Dept Comp Sci, Sulaymaniyah 75105, Iraq.
    Yuan, Di
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Optimal Content Caching and Recommendation With Age of Information. 2024. In: IEEE Transactions on Mobile Computing, ISSN 1536-1233, E-ISSN 1558-0660, Vol. 23, no. 1, p. 689-704. Article in journal (Refereed)
    Abstract [en]

    Content caching at the network edge is an effective way of mitigating backhaul load and improving user experience. Caching efficiency can be enhanced by content recommendation and by keeping the information fresh. With content recommendation, a requested content that is not in the cache can alternatively be satisfied by a related cached content recommended by the system. Information freshness can be quantified by age of information (AoI). This article makes the following contributions. First, we address optimal scheduling of cache updates for a time-slotted system accounting for content recommendation and AoI; to the best of our knowledge, no previous work has jointly taken these aspects into account. Next, we rigorously prove the problem's NP-hardness. Then, we derive an integer linear formulation, by which the optimal solution can be obtained for small-scale scenarios. On the algorithmic side, our contributions include the development of an effective algorithm based on Lagrangian decomposition, and efficient algorithms for solving the resulting subproblems. Our algorithm computes a bound that can be used to evaluate the performance of any suboptimal solution. We conduct simulations to show the effectiveness of our algorithm.
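    The bound mentioned here comes from Lagrangian relaxation: relaxing a coupling constraint with a nonnegative multiplier makes the problem decompose into independent subproblems, and for any such multiplier the relaxed optimum bounds the true optimum. A generic knapsack-style toy (the numbers are invented and this is not the paper's model):

```python
from itertools import combinations

utility  = [6, 5, 4, 3]   # hypothetical per-content utilities
size     = [4, 3, 2, 1]   # hypothetical content sizes
capacity = 5              # hypothetical cache capacity

# Exact optimum by brute force (feasible only for this tiny instance).
best = max(
    sum(utility[i] for i in subset)
    for r in range(len(utility) + 1)
    for subset in combinations(range(len(utility)), r)
    if sum(size[i] for i in subset) <= capacity
)

def lagrangian_bound(lam):
    """Relax the capacity constraint with multiplier lam >= 0:
    each item then decides independently whether to be selected."""
    return sum(max(u - lam * s, 0) for u, s in zip(utility, size)) + lam * capacity

# The bound is valid for every lam >= 0; take the best over a grid.
bound = min(lagrangian_bound(l / 10) for l in range(51))
print(best)           # 9 (select the items of size 1 and 4, or 2 and 3)
print(bound >= best)  # True: the Lagrangian dual bounds the optimum
```

    In the paper's setting the subproblems are content-wise scheduling problems rather than single yes/no choices, but the role of the bound, namely certifying how far any heuristic schedule is from the optimum, is the same.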

  • 7.
    Ahani, Ghafour
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Yuan, Di
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Optimal scheduling of content caching subject to deadline. 2020. In: IEEE Open Journal of the Communications Society, E-ISSN 2644-125X, Vol. 1, p. 293-307. Article in journal (Refereed)
    Abstract [en]

    Content caching at the edge of the network is a promising technique to alleviate the burden on backhaul networks. In this paper, we consider content caching over time in a base station with limited cache capacity. As the popularity of contents may vary over time, the contents of the cache need to be updated accordingly. In addition, a requested content may have a delivery deadline within which the content needs to be obtained. Motivated by these observations, we address optimal scheduling of content caching in a time-slotted system under delivery deadline and cache capacity constraints. The objective is to minimize a cost function that captures the load of the backhaul links. For our optimization problem, we prove its NP-hardness via a reduction from the Partition problem. For problem solving, via a mathematical reformulation, we develop a solution approach based on repeatedly applying a column generation algorithm and a problem-tailored rounding algorithm. In addition, two greedy algorithms are developed based on existing algorithms from the literature. Finally, we present extensive simulations that verify the effectiveness of our solution approach in obtaining near-optimal solutions in comparison to the greedy algorithms. The solutions obtained from our solution approach are within 1.6% of global optimality.

  • 8.
    Ahani, Ghafour
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Yuan, Di
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Sun, Sumei
    Institute for Infocomm Research, Singapore, Singapore;Singapore Institute of Technology, Singapore, Singapore.
    Optimal Scheduling of Age-centric Caching: Tractability and Computation. 2022. In: IEEE Transactions on Mobile Computing, ISSN 1536-1233, E-ISSN 1558-0660, Vol. 21, p. 2939-2954. Article in journal (Refereed)
    Abstract [en]

    The notion of age of information (AoI) has become an important performance metric in network and control systems. Information freshness, represented by AoI, naturally arises in the context of caching. We address optimal scheduling of cache updates for a time-slotted system where the contents vary in size. The capacity of the cache for making updates is limited. Each content is associated with a utility function that depends on the AoI and the time duration of absence from the cache. For this combinatorial optimization problem, we present the following contributions. First, we provide theoretical results on problem tractability. Whereas the problem is NP-hard in general, we prove tractability in polynomial time for a special case with uniform content size, by a reformulation using network flows. Second, we derive an integer linear formulation for the problem, by which the optimal solution can be obtained for small-scale scenarios. Next, via a mathematical reformulation, we derive a scalable optimization algorithm using repeated column generation. In addition, the algorithm computes a bound on the global optimum that can be used to assess the performance of any scheduling solution. Performance evaluation of large-scale scenarios demonstrates the strengths of the algorithm in comparison to a greedy schedule. Finally, we extend the applicability of our work to cyclic scheduling.

  • 9.
    Andersson, Andreas
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Utilizing Neural Networks To Adaptively Demodulate And Decode Signals In An Impulsive Environment. 2023. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Electromagnetic disturbance can be detrimental to the performance of a radio communication system, and in today's society, where more and more electronic devices are present in our everyday life, it is increasingly important to consider man-made interference. A communication system can take the noise characteristics into consideration and will then excel in such environments; however, the algorithms utilized in such systems are more computationally complex and therefore slow. This master's thesis explores the possibility of a neural network-based solution that reaches the same accuracy as existing methods, but more quickly. Numerous existing model alternatives have been explored, and a plethora of improvement techniques have been outlined. Two models, Hannet and Lannet, have been designed and improved to enable adaptive demodulation, both including and excluding decoding at the receiver, in an end-to-end communication system. The evaluation results demonstrate that the proposed models are comparable to, and in some cases even more accurate than, current standardized methods. However, the models are unable to fully learn the decoding algorithms present in the experiments. Thus, even though demodulation on its own performs well, performing decoding in conjunction with demodulation is out of reach for these models.

  • 10.
    Andrejev, Andrej
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Semantic Web Queries over Scientific Data. 2016. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Semantic Web and Linked Open Data provide a potential platform for interoperability of scientific data, offering a flexible model for providing machine-readable and queryable metadata. However, RDF and SPARQL gained limited adoption within the scientific community, mainly due to the lack of support for managing massive numeric data, along with certain other important features – such as extensibility with user-defined functions, query modularity, and integration with existing environments and workflows.

    We present the design, implementation and evaluation of Scientific SPARQL – a language for querying data and metadata combined, represented using the RDF graph model extended with numeric multidimensional arrays as node values – RDF with Arrays. The techniques used to store RDF with Arrays in a scalable way and process Scientific SPARQL queries and updates are implemented in our prototype software – Scientific SPARQL Database Manager, SSDM, and its integrations with data storage systems and computational frameworks. This includes scalable storage solutions for numeric multidimensional arrays and an efficient implementation of array operations. The arrays can be physically stored in a variety of external storage systems, including files, relational databases, and specialized array data stores, using our Array Storage Extensibility Interface. Whenever possible, SSDM accumulates array operations and accesses array contents lazily.

    In scientific applications numeric computations are often used for filtering or post-processing the retrieved data, which can be expressed in a functional way. Scientific SPARQL allows expressing common query sub-tasks with functions defined as parameterized queries. This becomes especially useful along with functional language abstractions such as lexical closures and second-order functions, e.g. array mappers.

    Existing computational libraries can be interfaced and invoked from Scientific SPARQL queries as foreign functions. Cost estimates and alternative evaluation directions may be specified, aiding the construction of better execution plans. Costly array processing, e.g. filtering and aggregation, is thus performed on the server, reducing the amount of communication. Furthermore, common operations are delegated to the array storage back-ends, according to their capabilities. Both the expressivity and the performance of Scientific SPARQL are evaluated on a real-world example, and further performance tests are run using our mini-benchmark for array queries.
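The functional abstractions described in this abstract – parameterized sub-queries as lexical closures, second-order array functions, and lazy accumulation of array operations – can be sketched as follows. This is a hypothetical Python illustration of the concepts only; the class and function names are not SSDM's actual API.

```python
def make_threshold_filter(threshold):
    # A lexical closure acting as a parameterized sub-query:
    # the threshold is captured when the filter is created.
    def apply(values):
        return [v for v in values if v > threshold]
    return apply

class LazyArray:
    # Accumulates array operations and only touches the array
    # contents when evaluate() is called, mimicking lazy access.
    def __init__(self, data):
        self._data = list(data)
        self._ops = []
    def map(self, fn):
        # A second-order function: fn receives the whole array.
        self._ops.append(fn)
        return self
    def evaluate(self):
        result = self._data
        for op in self._ops:
            result = op(result)
        return result

measurements = LazyArray([0.2, 3.5, 1.1, 7.8])
over_one = make_threshold_filter(1.0)
print(measurements.map(over_one).evaluate())   # -> [3.5, 1.1, 7.8]
```

Deferring evaluation this way is what allows a query processor to push accumulated operations down to the server or storage back-end instead of shipping raw array data.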

    Download full text (pdf)
    fulltext
    Download (jpg)
    presentationsbild
  • 11.
    Aronis, Stavros
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Effective Techniques for Stateless Model Checking2018Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Stateless model checking is a technique for testing and verifying concurrent programs, based on exploring the different ways in which operations executed by the processes of a concurrent program can be scheduled. The goal of the technique is to expose all behaviours that can be a result of scheduling non-determinism. As the number of possible schedulings is huge, however, techniques that reduce the number of schedulings that must be explored to achieve verification have been developed. Dynamic partial order reduction (DPOR) is a prominent such technique.

    This dissertation presents a number of improvements to dynamic partial order reduction that significantly increase the effectiveness of stateless model checking. Central among these improvements are the Source and Optimal DPOR algorithms (and the theoretical framework behind them) and a technique that allows the observability of the interference of operations to be used in dynamic partial order reduction. Each of these techniques can exponentially decrease the number of schedulings that need to be explored to verify a concurrent program. The dissertation also presents a simple bounding technique that is compatible with DPOR algorithms and effective for finding bugs in concurrent programs, if the number of schedulings is too big to make full verification possible in a reasonable amount of time, even when the improved algorithms are used.

    All improvements have been implemented in Concuerror, a tool for applying stateless model checking to Erlang programs. In order to increase the effectiveness of the tool, the interference of the high-level operations of the Erlang/OTP implementation is examined, classified and precisely characterized. Aspects of the implementation of the tool are also described. Finally, a use case is presented, showing how Concuerror was used to find bugs and verify key correctness properties in repair techniques for the CORFU chain replication protocol.
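The starting point of stateless model checking, as described above, can be made concrete with a toy exploration of all schedulings of two processes. The sketch below (a simplified illustration, not Concuerror's implementation) shows why reduction matters: independent operations produce multiple schedulings with identical outcomes, which is exactly the redundancy DPOR algorithms avoid exploring.

```python
def interleavings(a, b):
    # All schedulings of two processes' operation lists that
    # preserve each process's own program order.
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def explore(ops_a, ops_b):
    # Run every scheduling against a fresh shared store and record
    # the distinct final states, as a naive stateless model checker
    # would before any partial order reduction.
    outcomes = set()
    count = 0
    for sched in interleavings(ops_a, ops_b):
        store = {}
        for op in sched:
            op(store)
        count += 1
        outcomes.add(tuple(sorted(store.items())))
    return count, outcomes

# Two independent writes: 2 schedulings, but only 1 distinct outcome.
count, outcomes = explore([lambda s: s.update(x=1)],
                          [lambda s: s.update(y=2)])
print(count, len(outcomes))   # -> 2 1
```

With conflicting writes to the same variable, both schedulings must be explored; with independent writes, one suffices – distinguishing the two cases dynamically is the core of DPOR.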

    List of papers
    1. Source Sets: A Foundation for Optimal Dynamic Partial Order Reduction
    Open this publication in new window or tab >>Source Sets: A Foundation for Optimal Dynamic Partial Order Reduction
    2017 (English)In: Journal of the ACM, ISSN 0004-5411, E-ISSN 1557-735X, Vol. 64, no 4, article id 25Article in journal (Refereed) Published
    Abstract [en]

    Stateless model checking is a powerful method for program verification that, however, suffers from an exponential growth in the number of explored executions. A successful technique for reducing this number, while still maintaining complete coverage, is Dynamic Partial Order Reduction (DPOR), an algorithm originally introduced by Flanagan and Godefroid in 2005 and since then not only used as a point of reference but also extended by various researchers. In this article, we present a new DPOR algorithm, which is the first to be provably optimal in that it always explores the minimal number of executions. It is based on a novel class of sets, called source sets, that replace the role of persistent sets in previous algorithms. We begin by showing how to modify the original DPOR algorithm to work with source sets, resulting in an efficient and simple-to-implement algorithm, called source-DPOR. Subsequently, we enhance this algorithm with a novel mechanism, called wakeup trees, that allows the resulting algorithm, called optimal-DPOR, to achieve optimality. Both algorithms are then extended to computational models where processes may disable each other, for example, via locks. Finally, we discuss trade-offs between the source- and optimal-DPOR algorithms and present programs that illustrate significant time and space performance differences between them. We have implemented both algorithms in a publicly available stateless model checking tool for Erlang programs, while the source-DPOR algorithm is at the core of a publicly available stateless model checking tool for C/pthread programs running on machines with relaxed memory models. Experiments show that source sets significantly increase the performance of stateless model checking compared to using the original DPOR algorithm and that wakeup trees incur only a small overhead in both time and space in practice.

    Place, publisher, year, edition, pages
    Association for Computing Machinery (ACM), 2017
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:uu:diva-331842 (URN)10.1145/3073408 (DOI)000410615900002 ()
    Projects
    UPMARC; RELEASE
    Funder
    EU, FP7, Seventh Framework Programme, 287510; Swedish Research Council
    Available from: 2017-10-18 Created: 2017-10-18 Last updated: 2018-01-13Bibliographically approved
    2. The shared-memory interferences of Erlang/OTP built-ins
    2017 (English)In: Proceedings of the 16th ACM SIGPLAN International Workshop on Erlang (Erlang '17) / [ed] Chechina, N.; Fritchie, S.L., New York: Association for Computing Machinery (ACM), 2017, p. 43-54Conference paper, Published paper (Refereed)
    Abstract [en]

    Erlang is a concurrent functional language based on the actor model of concurrency. In the purest form of this model, actors are realized by processes that do not share memory and communicate with each other exclusively via message passing. Erlang comes quite close to this model, as message passing is the primary form of interprocess communication and each process has its own memory area that is managed by the process itself. For this reason, Erlang is often referred to as implementing “shared nothing” concurrency. Although this is a convenient abstraction, in reality Erlang’s main implementation, the Erlang/OTP system, comes with a large number of built-in operations that access memory which is shared by processes. In this paper, we categorize these built-ins, and characterize the interferences between them that can result in observable differences of program behaviour when these built-ins are used in a concurrent setting. The paper is complemented by a publicly available suite of more than one hundred small Erlang programs that demonstrate the racing behaviour of these built-ins.

    Place, publisher, year, edition, pages
    New York: Association for Computing Machinery (ACM), 2017
    Keywords
    Actors, BEAM, Concuerror, Erlang, Scheduling nondeterminism
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:uu:diva-331840 (URN)10.1145/3123569.3123573 (DOI)000426922100005 ()978-1-4503-5179-9 (ISBN)
    Conference
    16th ACM SIGPLAN International Workshop on Erlang (Erlang), Sep 08, 2017, Oxford, England.
    Projects
    UPMARC
    Funder
    Swedish Research Council
    Available from: 2017-10-18 Created: 2017-10-18 Last updated: 2018-07-03Bibliographically approved
    3. Testing And Verifying Chain Repair Methods For CORFU Using Stateless Model Checking
    2017 (English)Conference paper, Published paper (Refereed)
    Abstract [en]

    Corfu is a distributed shared log that is designed to be scalable and reliable in the presence of failures and asynchrony. Internally, Corfu is fully replicated for fault tolerance, without sharding data or sacrificing strong consistency. In this case study, we present the modeling approaches we followed to test and verify, using Concuerror, the correctness of repair methods for the Chain Replication protocol suitable for Corfu. In the first two methods we tried, Concuerror located bugs quickly. In contrast, the tool did not manage to find bugs in the third method, but the time this took also motivated an improvement in the tool that reduces the number of traces explored. Besides more details about all of the above, we present experiences and lessons learned from applying stateless model checking to verify complex protocols suitable for distributed programming.

    Place, publisher, year, edition, pages
    Cham: Springer, 2017
    Series
    Lecture Notes in Computer Science ; 10510
    National Category
    Computer Systems; Software Engineering
    Identifiers
    urn:nbn:se:uu:diva-331836 (URN)10.1007/978-3-319-66845-1_15 (DOI)978-3-319-66844-4 (ISBN)978-3-319-66845-1 (ISBN)
    Conference
    Integrated Formal Methods. IFM 2017
    Projects
    UPMARC
    Available from: 2017-10-18 Created: 2017-10-18 Last updated: 2018-01-13Bibliographically approved
    4. Optimal dynamic partial order reduction with observers
    2018 (English)In: Tools and Algorithms for the Construction and Analysis of Systems: Part II, Springer, 2018, Vol. 10806, p. 229-248Conference paper, Published paper (Refereed)
    Abstract [en]

    Dynamic partial order reduction (DPOR) algorithms are used in stateless model checking (SMC) to combat the combinatorial explosion in the number of schedulings that need to be explored to guarantee soundness. The most effective of them, the Optimal DPOR algorithm, is optimal in the sense that it explores only one scheduling per Mazurkiewicz trace. In this paper, we enhance DPOR with the notion of observability, which makes dependencies between operations conditional on the existence of future operations, called observers. Observers naturally lead to a lazy construction of dependencies. This requires significant changes in the core of POR algorithms (and Optimal DPOR in particular), but also makes the resulting algorithm, Optimal DPOR with Observers, super-optimal in the sense that it explores exponentially fewer schedulings than Mazurkiewicz traces in some cases. We argue that observers come naturally in many concurrency models, and demonstrate the performance benefits that Optimal DPOR with Observers achieves in both an SMC tool for shared memory concurrency and a tool for concurrency via message passing, using both synthetic and actual programs as benchmarks.

    Place, publisher, year, edition, pages
    Springer, 2018
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 10806
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-333508 (URN)10.1007/978-3-319-89963-3_14 (DOI)000445822600014 ()978-3-319-89962-6 (ISBN)978-3-319-89963-3 (ISBN)
    Conference
    TACAS 2018, April 14–20, Thessaloniki, Greece.
    Projects
    UPMARC
    Funder
    Swedish Research Council
    Available from: 2018-04-14 Created: 2017-11-21 Last updated: 2019-01-07Bibliographically approved
    Download full text (pdf)
    fulltext
  • 12.
    Arvidsson, Ellen
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Castegren, Elias
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Clebsch, Sylvan
    Drossopoulou, Sophia
    Noble, James
    Parkinson, Matthew J.
    Wrigstad, Tobias
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Reference Capabilities for Flexible Memory Management2023In: Proceedings of the ACM on Programming Languages, E-ISSN 2475-1421, Vol. 7, no OOPSLA2, p. 1363-1393, article id 270Article in journal (Refereed)
    Abstract [en]

    Verona is a concurrent object-oriented programming language that organises all the objects in a program into a forest of isolated regions. Memory is managed locally for each region, so programmers can control a program's memory use by adjusting the partitioning of objects into regions, and by setting each region's memory management strategy. A thread can only mutate (allocate, deallocate) objects within one active region---its "window of mutability". Memory management costs are localised to the active region, ensuring overheads can be predicted and controlled. Moving the mutability window between regions is explicit, so code can be executed wherever it is required, yet programs remain in control of memory use. An ownership type system based on reference capabilities enforces region isolation, controlling aliasing within and between regions, yet supporting objects moving between regions and threads. Data accesses never need expensive atomic operations, and are always thread-safe.

    Download full text (pdf)
    fulltext
  • 13.
    Astrid, Berghult
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    A practical comparison between algebraic and statistical attacks on the lightweight cipher SIMON2016Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In the summer of 2013, the NSA released a new family of lightweight block ciphers called SIMON. However, they did not publish any assessment of the security of SIMON. Since then, only a few papers on this topic have been released, and none of them have included an algebraic analysis. Moreover, only one paper described a practical implementation of the attack. This master thesis aims to implement a practical attack, both algebraic and differential, on SIMON. In doing so, we are able to make a comparison between the two different attack methods. The algebraic attack was executed with the SAT-solver CryptoMiniSat2 and could break 7 rounds. The differential attack was implemented in three steps. First we created a difference distribution table (DDT), then we identified a differential using a search algorithm over the DDT. In the last step, we designed a key recovery attack to recover the last round key. The attack could break 13 rounds using a 9-round differential. With a simple expansion of the key recovery attack, it has the potential to break even more rounds for the same 9-round differential. This indicates that algebraic cryptanalysis might not be such a strong tool, since it could only break 7 rounds. Furthermore, if a generic algebraic attack does not work on SIMON, it has little or no chance of being successful on a more complex cipher. In other words, this algebraic attack may serve as a benchmark for the efficiency of generic algebraic attacks.
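The difference distribution table mentioned in the abstract is the standard first step of a differential attack: it counts, for every input difference, how often each output difference occurs. The sketch below builds a DDT for a toy 4-bit S-box (an arbitrary permutation chosen purely for illustration; SIMON itself has no S-box, and the thesis builds its DDT over SIMON's AND-and-rotate round function instead).

```python
def difference_distribution_table(sbox):
    # ddt[dx][dy] counts the inputs x with S(x) ^ S(x ^ dx) == dy.
    n = len(sbox)
    ddt = [[0] * n for _ in range(n)]
    for dx in range(n):
        for x in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            ddt[dx][dy] += 1
    return ddt

# A toy 4-bit S-box: an arbitrary permutation of 0..15.
toy_sbox = [6, 4, 12, 5, 0, 7, 2, 14, 1, 15, 3, 13, 8, 10, 9, 11]
ddt = difference_distribution_table(toy_sbox)
print(ddt[0][0])   # -> 16: zero input difference always gives zero output difference
```

High-probability entries in the DDT (large ddt[dx][dy] for dx != 0) are the differentials a search algorithm then chains across rounds before the key recovery step.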

    Download full text (pdf)
    fulltext
  • 14.
    Axel, Lindegren
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Partitioning temporal networks: A study of finding the optimal partition of temporal networks using community detection2018Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Many of the algorithms used for community detection in temporal networks have been adapted from static network theory. A common approach to dealing with the temporal dimension is to create multiple static networks from one temporal network, based on a time condition. This thesis focuses on identifying the optimal partitioning of a few temporal networks. This is done by utilizing the popular community detection algorithm called Generalized Louvain. The output of Generalized Louvain comes in two parts: first, the created community structure, i.e. how the network is connected; second, a measure called modularity, a scalar value representing the quality of the identified community structure. The methodology used is aimed at creating comparable results by normalizing modularity. The normalization process can be explained in two major steps: 1) study the effects on modularity when partitioning a temporal network into an increasing number of slices; 2) study the effects on modularity when varying the number of connections (edges) in each time slice. The results show that the created methodology yields comparable results on two out of the four temporal networks tested here, implying that it might be more suited to some networks than others. This can serve as an indication that there is no general model for community detection in temporal networks. Instead, the type of network is key to choosing the method.
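The modularity measure central to this abstract has a simple closed form: for each community, the fraction of edges that fall inside it minus the fraction expected if edges were placed at random given the degrees. A minimal sketch (standard Newman modularity, not the multi-slice variant Generalized Louvain optimises):

```python
def modularity(edges, community):
    # Q = sum over communities c of (e_c / m) - (d_c / 2m)^2,
    # where e_c = edges inside c, d_c = total degree in c,
    # and m = total number of edges.
    m = len(edges)
    internal = {}
    degree = {}
    for u, v in edges:
        degree[community[u]] = degree.get(community[u], 0) + 1
        degree[community[v]] = degree.get(community[v], 0) + 1
        if community[u] == community[v]:
            internal[community[u]] = internal.get(community[u], 0) + 1
    return sum(internal.get(c, 0) / m - (degree[c] / (2 * m)) ** 2
               for c in degree)

# Two triangles joined by a single bridge edge: splitting at the
# bridge gives a clearly positive modularity.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
parts = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, parts), 4))   # -> 0.3571
```

The normalization step the thesis describes is needed precisely because this raw Q value is not directly comparable across networks with different slicing and edge counts.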

    Download full text (pdf)
    fulltext
  • 15.
    Aziz, Yama
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Exploring a keyword driven testing framework: a case study at Scania IT2017Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The purpose of this thesis is to investigate organizational quality assurance through the international testing standard ISO 29119. The focus is on how an organization carries out testing processes and designs and implements test cases. Keyword driven testing is a test composition concept in ISO 29119 and is suitable for automation. This thesis will answer how keyword driven testing can facilitate the development of maintainable test cases and support test automation in an agile organization.

    The methodology used was a qualitative case study including semi-structured interviews and focus groups with agile business units within Scania IT. Among the interview participants were developers, test engineers, scrum masters and a unit manager.

    The results describe testing practices carried out in several agile business units, maintainability issues with test automation and general ideas of how test automation should be approached. Common issues with test automation were test cases failing due to changed test inputs, inexperience with test automation frameworks and lack of resources due to project release cycle.

    This thesis concludes that keyword driven testing has the potential of solving several maintainability issues with test cases breaking. However, the practicality and effectiveness of said potential remain unanswered. Moreover, successfully developing an automated keyword driven testing framework requires integration with existing test automation tools and considering the agile organizational circumstances.
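The maintainability argument in this abstract rests on the core mechanic of keyword driven testing: test cases are plain data (keyword name plus arguments), and each keyword maps to exactly one implementation, so a changed workflow is fixed in one place rather than in every test. A minimal sketch, with all keyword names and steps hypothetical (not Scania IT's actual framework):

```python
# Keyword implementations: one function per keyword, all operating
# on a shared test context.
def open_app(ctx):
    ctx["screen"] = "start"

def login(ctx, user):
    ctx["user"] = user
    ctx["screen"] = "home"

def assert_screen(ctx, expected):
    assert ctx["screen"] == expected, ctx

KEYWORDS = {
    "Open App": open_app,
    "Login": login,
    "Assert Screen": assert_screen,
}

def run_test(steps):
    # A test case is pure data: a sequence of (keyword, *args) tuples.
    ctx = {}
    for name, *args in steps:
        KEYWORDS[name](ctx, *args)
    return ctx

result = run_test([("Open App",),
                   ("Login", "tester"),
                   ("Assert Screen", "home")])
print(result["user"])   # -> tester
```

If the login flow changes, only the `login` keyword implementation is edited; every test case that uses the keyword keeps working, which is the breakage-resistance the thesis identifies as the approach's main potential.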

    Download full text (pdf)
    Exploring a keyword driven testing framework
  • 16.
    Backlund, Ludvig
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    A technical overview of distributed ledger technologies in the Nordic capital market.2016Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis examines how Distributed Ledger Technologies (DLTs) could be utilized in capital markets in general and in the Nordic capital market in particular. DLTs were introduced with the so-called cryptocurrency Bitcoin in 2009 and have in the last few years been of interest to various financial institutions as a means to streamline financial processes. By combining computer scientific concepts such as public-key cryptography and consensus algorithms, DLTs make it possible to keep shared databases with limited trust among the participants and without the use of a trusted third party. In this thesis, various actors on the Nordic capital market were interviewed and their stance on DLTs was summarized. In addition to this, a Proof of Concept of a permissioned DLT application for ownership registration of securities was constructed. It was found that all the interviewees were generally optimistic about DLTs' potential to increase the efficiency of capital markets. The technology needs to be adapted to handle the capital market's demands for privacy and large transaction volumes, but there is general agreement among the interviewees that these issues will be solved. The biggest challenge for an adoption of DLTs seems to lie in finding a common industry-wide standard.
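The core mechanism that lets a ledger be shared "with limited trust" is hash chaining: each block commits to the hash of its predecessor, so tampering with history invalidates everything after it. A minimal sketch of such a ledger for ownership registration of securities (illustrative only; it omits the consensus and signature layers a real permissioned DLT needs, and all names are hypothetical):

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's canonical JSON encoding.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transaction):
    # Each new block commits to the hash of the previous one.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "tx": transaction})
    return chain

def chain_valid(chain):
    # Verify every block's commitment to its predecessor.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append_block(ledger, {"security": "ACME-A", "owner": "Alice"})
append_block(ledger, {"security": "ACME-A", "owner": "Bob"})
print(chain_valid(ledger))            # -> True
ledger[0]["tx"]["owner"] = "Mallory"  # tamper with recorded history...
print(chain_valid(ledger))            # -> False: the chain no longer verifies
```

Any participant holding a copy of the chain can detect the tampering without trusting the party that supplied it, which is what removes the need for a trusted third party to vouch for the record.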

    Download full text (pdf)
    fulltext
  • 17.
    Badiozamany, Sobhan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Real-time data stream clustering over sliding windows2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In many applications, e.g. urban traffic monitoring, stock trading, and industrial sensor data monitoring, clustering algorithms are applied on data streams in real-time to find current patterns. Here, sliding windows are commonly used as they capture concept drift.

    Real-time clustering over sliding windows means the early detection of continuously evolving clusters as soon as they occur in the stream, which requires efficient maintenance of cluster memberships that change as windows slide.

    Data stream management systems (DSMSs) provide high-level query languages for searching and analyzing streaming data. In this thesis we extend a DSMS with a real-time data stream clustering framework called the Generic 2-phase Continuous Summarization framework (G2CS). G2CS modularizes data stream clustering by taking as input clustering algorithms that are expressed in terms of a number of functions and indexing structures. G2CS supports real-time clustering through an efficient window-sliding mechanism and algorithm-transparent indexing. A particular challenge for real-time detection of a high number of rapidly evolving clusters is the efficiency of window slides for clustering algorithms where deletion of expired data is not supported, e.g. BIRCH. To that end, G2CS includes a novel window maintenance mechanism called Sliding Binary Merge (SBM). To further improve real-time sliding performance, G2CS uses generation-based multi-dimensional indexing, where indexing structures suitable for the clustering algorithms can be plugged in.
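The window-maintenance problem described above – sliding a window for a summarizer that cannot delete individual expired points – can be illustrated with per-pane summaries: the stream is summarised in fixed-size panes, and a slide simply drops the oldest pane and merges the rest. This is a simplified Python illustration of the problem SBM addresses, not G2CS's actual algorithm.

```python
from collections import deque

class PaneWindow:
    # Each pane is summarised as (count, sum); a window slide drops
    # the oldest pane, so expired points never need to be deleted
    # one by one from the summary.
    def __init__(self, max_panes):
        self.panes = deque(maxlen=max_panes)
    def add_pane(self, points):
        self.panes.append((len(points), sum(points)))
    def centroid(self):
        # Merge the per-pane summaries of the current window.
        n = sum(c for c, _ in self.panes)
        return sum(s for _, s in self.panes) / n

w = PaneWindow(max_panes=2)
w.add_pane([1, 2, 3])
print(w.centroid())    # -> 2.0
w.add_pane([10, 20])
w.add_pane([100])      # window slides: the [1, 2, 3] pane expires
print(w.centroid())
```

Because (count, sum) summaries are mergeable, the cost of a slide is proportional to the number of panes, not to the number of expired data points, which is the key to keeping slides fast for deletion-free algorithms such as BIRCH.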

    List of papers
    1. Scalable ordered indexing of streaming data
    2012 (English)In: 3rd International Workshop on Accelerating Data Management Systems using Modern Processor and Storage Architectures, 2012, p. 11-Conference paper, Published paper (Refereed)
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-185068 (URN)
    Conference
    ADMS 2012, Istanbul, Turkey
    Projects
    eSSENCE
    Available from: 2012-08-27 Created: 2012-11-19 Last updated: 2018-01-12Bibliographically approved
    2. Grand challenge: Implementation by frequently emitting parallel windows and user-defined aggregate functions
    2013 (English)In: Proc. 7th ACM International Conference on Distributed Event-Based Systems, New York: ACM Press, 2013, p. 325-330Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    New York: ACM Press, 2013
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-211954 (URN)10.1145/2488222.2488284 (DOI)978-1-4503-1758-0 (ISBN)
    Conference
    DEBS 2013
    Available from: 2013-06-29 Created: 2013-12-03 Last updated: 2018-01-11Bibliographically approved
    3. Distributed multi-query optimization of continuous clustering queries
    2014 (English)In: Proc. VLDB 2014 PhD Workshop, 2014Conference paper, Published paper (Refereed)
    Abstract [en]

    This work addresses the problem of sharing execution plans for queries that continuously cluster streaming data to provide an evolving summary of the data stream. This is challenging since clustering is an expensive task, many clustering queries may run simultaneously, each continuous query has a long lifetime, and the execution plans often overlap. Clustering is similar to conventional grouped aggregation, but cluster formation is more expensive than group formation, which makes incremental maintenance more challenging. The goal of this work is to minimize the response time of continuous clustering queries with limited resources through multi-query optimization. To that end, strategies for sharing execution plans between continuous clustering queries are investigated and the architecture of a system is outlined that optimizes the processing of multiple such queries. Since there are many clustering algorithms, the system should be extensible to easily incorporate user-defined clustering algorithms.

    National Category
    Computer Sciences
    Research subject
    Computer Science with specialization in Database Technology
    Identifiers
    urn:nbn:se:uu:diva-302790 (URN)
    Conference
    VLDB 2014
    Available from: 2016-09-09 Created: 2016-09-09 Last updated: 2018-01-10Bibliographically approved
    4. Framework for real-time clustering over sliding windows
    2016 (English)In: Proc. 28th International Conference on Scientific and Statistical Database Management, New York: ACM Press, 2016, p. 1-13, article id 19Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    New York: ACM Press, 2016
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-302792 (URN)10.1145/2949689.2949696 (DOI)978-1-4503-4215-5 (ISBN)
    Conference
    SSDBM 2016
    Available from: 2016-07-18 Created: 2016-09-09 Last updated: 2018-01-10Bibliographically approved
    Download full text (pdf)
    fulltext
  • 18.
    Bergquist, Jonatan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Blockchain Technology and Smart Contracts: Privacy-preserving Tools2017Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The purpose of this Master's thesis is to explore blockchain technology and smart contracts as a way of building privacy-sensitive applications. The main focus is on a medication plan containing prescriptions, built on a blockchain system of smart contracts. This is an example use case, but the results can be transferred to other ones where sensitive data is being shared and a proof of validity or authentication is needed. First, the problem is presented: why medication plans need digitalisation and why blockchain technology is a fitting technology for implementing such an application. Then blockchain technology is explained, since it is a very new and relatively unfamiliar IT construct. Thereafter, a design is proposed for solving the problem. A system of smart contracts was built to show how such an application can be built, together with suggested guidelines for how a blockchain system should be designed to fulfil the requirements that were defined. Finally, a discussion is held regarding the applicability of different blockchain designs to the problem of privacy-handling applications.

    Download full text (pdf)
    fulltext
  • 19.
    Bernström, Kristian
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Näsman, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Utredning och implementering av en prototyp för integration av Prevas FOCS och ABB 800xA2014Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    ABB and Prevas have initiated a collaboration to sell a system to optimize steel industry furnaces, called FOCS. The purpose of this thesis is to investigate possibilities for integrating Prevas FOCS and ABB 800xA.

    The result of the investigation was used to implement a prototype of the integrated system. The study presents a general method that can be used when integrating two software systems. The prototype of the integrated system was made with usability principles in mind. This is a very important aspect in order to create a good working environment for the operators of a steel plant. It is also important to follow communication standards when integrating software systems. In industrial networks used in the steel industry, OPC is a standard for communication. We recommend ABB and Prevas to follow this standard when possible to make the integration smoother. To keep the cost of the integration to a minimum, it is also recommended to reuse existing resources. This can, however, have a negative effect on usability, and it is therefore important to keep a balance between cost and usability.

    The prototype made in this thesis accomplishes the goal of transferring the functionalities used by operators of Prevas FOCS to 800xA so that operators can control the processes using only one integrated system.

    Download full text (pdf)
    fulltext
  • 20.
    Björdal, Gustav
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    From Declarative Models to Local Search, 2021. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    A solver is a general-purpose software for solving optimisation problems. It takes as input a description of a problem, called a model, and uses a collection of algorithms, called its solving technology, to ideally produce an optimal solution as output. Most solvers have a modelling language that cannot be processed by other solvers. This means that there is a risk of making an early commitment to a solver and its technology when writing a model. To address this risk, and to increase the accessibility of solvers, there has been a push for technology-independent modelling languages, a notable one being MiniZinc.

    A model written in MiniZinc is transformed by the MiniZinc toolchain in order to suit a targeted solver and its technology. However, for a solver to process a MiniZinc model, it also requires what is called a backend for MiniZinc. A backend translates the transformed MiniZinc model into the solver’s own modelling language and synthesises any components not in a MiniZinc model that the solver (or its technology) requires.

    The solving technology called constraint-based local search (CBLS) is based on the popular algorithm design methodology called local search, which often quickly produces near-optimal solutions, even to large problems. So, with the advent of CBLS solvers, there is a need for CBLS backends to modelling languages like MiniZinc.

    This thesis contributes to three research topics. First, it shows for the first time how to create a CBLS backend for a technology-independent modelling language, namely MiniZinc, and it shows that CBLS via MiniZinc can be competitive for solving optimisation problems. Second, it extends MiniZinc with concepts from local search, and shows that these concepts can be used even by other technologies towards designing new types of solvers. Third, it extends the utilisation of another technology, namely constraint programming, inside local-search solvers and backends.

    These contributions make local search accessible to all users of modelling languages like MiniZinc, and allow some optimisation problems to be solved more efficiently via such languages.
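    The local-search methodology the abstract builds on can be illustrated with a minimal, self-contained sketch: plain hill climbing in Python. The toy problem and all names here are illustrative, not taken from the thesis.

```python
import random

def hill_climb(initial, neighbours, cost, max_steps=10_000):
    """Greedy local search: repeatedly move to the best neighbour
    until no neighbour improves the cost (a local optimum)."""
    current = initial
    for _ in range(max_steps):
        best = min(neighbours(current), key=cost, default=current)
        if cost(best) >= cost(current):
            return current          # no improving move left
        current = best
    return current

# Toy problem: pick x in {0, ..., 99} minimising (x - 42)^2.
solution = hill_climb(
    initial=random.randrange(100),
    neighbours=lambda x: [y for y in (x - 1, x + 1) if 0 <= y < 100],
    cost=lambda x: (x - 42) ** 2,
)
print(solution)  # this toy cost surface has a single optimum, x = 42
```

Real CBLS solvers add incremental cost evaluation and meta-heuristics to escape local optima; this sketch only shows the basic move-and-improve loop.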

    List of papers
    1. A constraint-based local search backend for MiniZinc
    2015 (English). In: Constraints, ISSN 1383-7133, E-ISSN 1572-9354, Vol. 20, no. 3, p. 325-345. Article in journal (Refereed). Published.
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-255035 (URN); 10.1007/s10601-015-9184-z (DOI); 000354216900004
    Available from: 2015-03-07. Created: 2015-06-12. Last updated: 2021-03-03. Bibliographically approved.
    2. Generating compound moves in local search by hybridisation with complete search
    2019 (English). In: Integration of Constraint Programming, Artificial Intelligence, and Operations Research / [ed] L.-M. Rousseau and K. Stergiou, Springer, 2019, Vol. 11494, p. 95-111. Conference paper, Published paper (Refereed).
    Abstract [en]

    A black-box local-search backend to a solving-technology-independent modelling language, such as MiniZinc, automatically infers from the structure of a declarative model for a satisfaction or optimisation problem a combination of a neighbourhood, a heuristic, and a meta-heuristic. These ingredients, which in a handcrafted local-search algorithm are designed manually, are then provided to a local-search solver. However, such a backend can perform poorly due to model structure that is inappropriate for local search, for example when it considers moves modifying only variables that represent auxiliary information. Towards overcoming such inefficiency, we propose compound-move generation, an extension to local-search solvers that uses a complete-search solver in order to augment moves modifying non-auxiliary variables so that they also modify auxiliary ones. Since compound-move generation is intended to be applied to such models, we discuss how to identify them.

    We present several refinements of compound-move generation and show its very positive impact on several third-party models. This helps reduce the unavoidable gap between black-box local search and local-search algorithms crafted by experts.

    Place, publisher, year, edition, pages
    Springer, 2019
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 11494
    National Category
    Computer Sciences
    Research subject
    Computing Science
    Identifiers
    urn:nbn:se:uu:diva-396494 (URN); 10.1007/978-3-030-19212-9_7 (DOI); 000884721600007; 978-3-030-19212-9 (ISBN)
    Conference
    CPAIOR 2019, 4 – 7 June, Thessaloniki, Greece
    Funder
    Swedish Research Council, 2015-04910
    Available from: 2019-11-18. Created: 2019-11-18. Last updated: 2023-03-03. Bibliographically approved.
    3. Declarative local-search neighbourhoods in MiniZinc
    2018 (English). In: Proceedings of the 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), IEEE Computer Society, 2018, p. 98-105. Conference paper, Published paper (Refereed).
    Abstract [en]

    The aim of solver-independent modelling is to create a model of a satisfaction or optimisation problem independent of a particular technology. This avoids early commitment to a solving technology and allows easy comparison of technologies. MiniZinc is a solver-independent modelling language, supported by CP, MIP, SAT, SMT, and constraint-based local search (CBLS) backends. Some technologies, in particular CP and CBLS, require not only a model but also a search strategy. While backends for these technologies offer default search strategies, it is often beneficial to include in a model a user-specified search strategy for a particular technology, especially if the strategy can encapsulate knowledge about the problem structure. This is complex since a local-search strategy (comprising a neighbourhood, a heuristic, and a meta-heuristic) is often tightly tied to the model. Hence we wish to use the same language for specifying the model and the local search. We show how to extend MiniZinc so that one can attach a fully declarative neighbourhood specification to a model, while maintaining the solver-independence of the language. We explain how to integrate a model-specific declarative neighbourhood with an existing CBLS backend for MiniZinc.

    Place, publisher, year, edition, pages
    IEEE Computer Society, 2018
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-368077 (URN); 10.1109/ICTAI.2018.00025 (DOI); 000457750200015; 978-1-5386-7449-9 (ISBN)
    Conference
    30th IEEE International Conference on Tools with Artificial Intelligence (ICTAI), 5-7 November 2018, Volos, Greece
    Funder
    Swedish Research Council, 2015-04910
    Available from: 2018-12-17. Created: 2018-12-03. Last updated: 2021-03-03. Bibliographically approved.
    4. Exploring declarative local-search neighbourhoods with constraint programming
    2019 (English). In: Principles and Practice of Constraint Programming / [ed] Thomas Schiex and Simon de Givry, Switzerland: Springer, 2019, Vol. 11802, p. 37-53. Conference paper, Published paper (Refereed).
    Abstract [en]

    Using constraint programming (CP) to explore a local-search neighbourhood was first tried in the mid 1990s. The advantage is that constraint propagation can quickly rule out uninteresting neighbours, sometimes greatly reducing the number actually probed. However, a CP model of the neighbourhood has to be handcrafted from the model of the problem: this can be difficult and tedious. That research direction appears abandoned since large-neighbourhood search (LNS) and constraint-based local search (CBLS) arose as alternatives that seem easier to use. Recently, the notion of declarative neighbourhood was added to the technology-independent modelling language MiniZinc, for use by any backend to MiniZinc, but currently only used by a CBLS backend. We demonstrate that declarative neighbourhoods are indeed technology-independent by using the old idea of CP-based neighbourhood exploration: we explain how to encode automatically a declarative neighbourhood into a CP model of the neighbourhood. This enables us to lift any CP solver into a local-search backend to MiniZinc. Our prototype is competitive with CP, CBLS, and LNS backends to MiniZinc.

    Place, publisher, year, edition, pages
    Switzerland: Springer, 2019
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 11802
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-396813 (URN); 10.1007/978-3-030-30048-7_3 (DOI); 000560404200003; 978-3-030-30047-0 (ISBN); 978-3-030-30048-7 (ISBN)
    Conference
    25th International Conference, CP 2019, Stamford, CT, USA, September 30 – October 4, 2019
    Funder
    Swedish Research Council, 2015-04910
    Available from: 2019-11-18. Created: 2019-11-18. Last updated: 2021-03-03. Bibliographically approved.
    5. Solving Satisfaction Problems using Large-Neighbourhood Search
    2020 (English). In: Principles and Practice of Constraint Programming / [ed] Helmut Simonis, 2020, Vol. 12333, p. 55-71. Conference paper, Published paper (Refereed).
    Abstract [en]

    Large-neighbourhood search (LNS) improves an initial solution, hence it is not directly applicable to satisfaction problems. In order to use LNS in a constraint programming (CP) framework to solve satisfaction problems, we usually soften some hard-to-satisfy constraints by replacing them with penalty-function constraints. LNS is then used to reduce their penalty to zero, thus satisfying the original problem. However, this can give poor performance as the penalties rarely cause propagation and therefore do not drive each CP search, and by extension the LNS search, towards satisfying the replaced constraints until very late. Our key observation is that entirely replacing a constraint is often overkill, as the propagator for the replaced constraint could have performed some propagation without causing backtracking. We propose the notion of a non-failing propagator, which is subsumed just before causing a backtrack. We show that, by only making a few changes to an existing CP solver, any propagator can be made non-failing without modifying its code. Experimental evaluation shows that non-failing propagators, when used in conjunction with penalties, can greatly improve LNS performance compared to just having penalties. This allows us to leverage the power of the many sophisticated propagators that already exist in CP solvers, in order to use LNS for solving hard satisfaction problems and for finding initial solutions to hard-to-satisfy optimisation problems.
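    The softening idea described above, replacing a hard constraint by a penalty that the search drives to zero, can be sketched outside any CP framework. The following min-conflicts-style Python fragment is illustrative only and does not reproduce the paper's propagator-based method: an all_different constraint is softened into a count of violated variable pairs, and the search terminates when that penalty reaches zero.

```python
import random

def violations(xs):
    """Penalty for a softened all_different constraint: the number of
    variable pairs sharing a value (0 means the constraint holds)."""
    return sum(xs[i] == xs[j]
               for i in range(len(xs)) for j in range(i + 1, len(xs)))

def min_conflicts(n, domain, rng, max_steps=10_000):
    """Drive the penalty to zero by repeatedly reassigning one
    randomly chosen variable to its least-conflicting value."""
    xs = [rng.choice(domain) for _ in range(n)]
    for _ in range(max_steps):
        if violations(xs) == 0:
            return xs               # original hard constraint satisfied
        i = rng.randrange(n)
        xs[i] = min(domain, key=lambda v: violations(xs[:i] + [v] + xs[i + 1:]))
    return None                     # gave up within the step budget

solution = min_conflicts(5, list(range(5)), random.Random(1))
```

With five variables over five values, a zero-penalty assignment is a permutation; the paper's contribution is to make such penalty-guided search propagate much earlier via non-failing propagators, which this sketch does not model.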

    Series
    Lecture Notes in Computer Science, ISSN 1611-3349 ; 12333
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-422974 (URN); 10.1007/978-3-030-58475-7_4 (DOI); 978-3-030-58474-0 (ISBN); 978-3-030-58475-7 (ISBN)
    Conference
    26th International Conference on Principles and Practice of Constraint Programming, Louvain-la-Neuve, Belgium, September 7 – September 11, 2020
    Funder
    Swedish Research Council, 2015-04910
    Available from: 2020-10-19. Created: 2020-10-19. Last updated: 2021-03-03. Bibliographically approved.
    Download full text (pdf)
    UUThesis-G_Björdal_2021
  • 21.
    Björdal, Gustav
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    String Variables for Constraint-Based Local Search, 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    String variables occur as a natural part of many computationally challenging problems. Usually, such problems are solved using problem-specific algorithms implemented from first principles, which can be a time-consuming and error-prone task. A constraint solver is a framework that can be used to solve computationally challenging problems by first declaratively defining the problem and then solving it using specialised off-the-shelf algorithms, which can cut down development time significantly and result in faster solution times and higher solution quality. There are many constraint solving technologies, one of which is constraint-based local search (CBLS). However, very few constraint solvers have native support for solving problems with string variables. The goal of this thesis is to add string variables as a native type to the CBLS solver OscaR/CBLS. The implementation was experimentally evaluated on the Closest String Problem and the Word Equation System problem. The evaluation shows that string variables for CBLS can be a viable option for solving string problems. However, further work is required to obtain even more competitive performance.
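    As an illustration of the kind of problem the thesis evaluates on, here is a sketch of local search over a single string variable for the Closest String Problem, in plain Python rather than OscaR/CBLS; all function names and the instance are illustrative.

```python
import random

def closest_string(strings, alphabet, rng, steps=2000):
    """Greedy local search on one string variable: minimise the maximum
    Hamming distance to the given strings (the Closest String Problem)."""
    def cost(s):
        return max(sum(a != b for a, b in zip(s, t)) for t in strings)

    cur = list(rng.choice(strings))       # start from one of the inputs
    for _ in range(steps):
        i = rng.randrange(len(cur))
        # A typical string-variable move: greedily replace one character.
        cur[i] = min(alphabet, key=lambda c: cost(cur[:i] + [c] + cur[i + 1:]))
    return "".join(cur), cost(cur)

center, dist = closest_string(["abab", "abba", "abaa"], "ab", random.Random(0))
```

On this tiny instance the best achievable maximum distance is 1 (the inputs differ, so 0 is impossible, and "abaa" attains 1); the greedy per-character move never worsens the cost, so the search settles there.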

    Download full text (pdf)
    fulltext
  • 22.
    Björdal, Gustav
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Flener, Pierre
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Pearson, Justin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Generating compound moves in local search by hybridisation with complete search, 2019. In: Integration of Constraint Programming, Artificial Intelligence, and Operations Research / [ed] L.-M. Rousseau and K. Stergiou, Springer, 2019, Vol. 11494, p. 95-111. Conference paper (Refereed).
    Abstract [en]

    A black-box local-search backend to a solving-technology-independent modelling language, such as MiniZinc, automatically infers from the structure of a declarative model for a satisfaction or optimisation problem a combination of a neighbourhood, a heuristic, and a meta-heuristic. These ingredients, which in a handcrafted local-search algorithm are designed manually, are then provided to a local-search solver. However, such a backend can perform poorly due to model structure that is inappropriate for local search, for example when it considers moves modifying only variables that represent auxiliary information. Towards overcoming such inefficiency, we propose compound-move generation, an extension to local-search solvers that uses a complete-search solver in order to augment moves modifying non-auxiliary variables so that they also modify auxiliary ones. Since compound-move generation is intended to be applied to such models, we discuss how to identify them.

    We present several refinements of compound-move generation and show its very positive impact on several third-party models. This helps reduce the unavoidable gap between black-box local search and local-search algorithms crafted by experts.

  • 23.
    Björdal, Gustav
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Flener, Pierre
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Pearson, Justin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Stuckey, Peter J.
    Faculty of Information Technology, Monash University, Melbourne, Australia.
    Exploring declarative local-search neighbourhoods with constraint programming, 2019. In: Principles and Practice of Constraint Programming / [ed] Thomas Schiex and Simon de Givry, Switzerland: Springer, 2019, Vol. 11802, p. 37-53. Conference paper (Refereed).
    Abstract [en]

    Using constraint programming (CP) to explore a local-search neighbourhood was first tried in the mid 1990s. The advantage is that constraint propagation can quickly rule out uninteresting neighbours, sometimes greatly reducing the number actually probed. However, a CP model of the neighbourhood has to be handcrafted from the model of the problem: this can be difficult and tedious. That research direction appears abandoned since large-neighbourhood search (LNS) and constraint-based local search (CBLS) arose as alternatives that seem easier to use. Recently, the notion of declarative neighbourhood was added to the technology-independent modelling language MiniZinc, for use by any backend to MiniZinc, but currently only used by a CBLS backend. We demonstrate that declarative neighbourhoods are indeed technology-independent by using the old idea of CP-based neighbourhood exploration: we explain how to encode automatically a declarative neighbourhood into a CP model of the neighbourhood. This enables us to lift any CP solver into a local-search backend to MiniZinc. Our prototype is competitive with CP, CBLS, and LNS backends to MiniZinc.

  • 24.
    Björklund, Henrik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Combinatorial Optimization for Infinite Games on Graphs, 2005. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Games on graphs have become an indispensable tool in modern computer science. They provide powerful and expressive models for numerous phenomena and are extensively used in computer-aided verification, automata theory, logic, complexity theory, computational biology, etc.

    The infinite games on finite graphs we study in this thesis have their primary applications in verification, but are also of fundamental importance from the complexity-theoretic point of view. They include parity, mean payoff, and simple stochastic games.

    We focus on solving graph games by using iterative strategy improvement and methods from linear programming and combinatorial optimization. To this end we consider old strategy evaluation functions, construct new ones, and show how all of them, due to their structural similarities, fit into a unifying combinatorial framework. This allows us to employ randomized optimization methods from combinatorial linear programming to solve the games in expected subexponential time.

    We introduce and study the concept of a controlled optimization problem, capturing the essential features of many graph games, and provide sufficient conditions for solvability of such problems in expected subexponential time.

    The discrete strategy evaluation function for mean payoff games, which we derive from the new controlled longest-shortest path problem, leads to improvement algorithms that are considerably more efficient than the previously known ones, and also improves the efficiency of algorithms for parity games.

    We also define the controlled linear programming problem, and show how the games are translated into this setting. Subclasses of the problem, more general than the games considered, are shown to belong to NP ∩ coNP, or even to be solvable by subexponential algorithms.

    Finally, we take the first steps in investigating the fixed-parameter complexity of parity, Rabin, Streett, and Muller games.

    List of papers
    1. A discrete subexponential algorithm for parity games
    2003. In: STACS 2003, 20th Annual Symposium on Theoretical Aspects of Computer Science, 2003, p. 663-674. Chapter in book (Other academic). Published.
    Identifiers
    urn:nbn:se:uu:diva-92509 (URN); 3-540-00623-0 (ISBN)
    Available from: 2005-01-18. Created: 2005-01-18. Bibliographically approved.
    2. Complexity of model checking by iterative improvement: the pseudo-Boolean framework
    2003. In: Perspectives of Systems Informatics: 5th International Andrei Ershov Memorial Conference, 2003, p. 381-394. Chapter in book (Other academic). Published.
    Identifiers
    urn:nbn:se:uu:diva-92510 (URN); 3-540-20813-5 (ISBN)
    Available from: 2005-01-18. Created: 2005-01-18. Bibliographically approved.
    3. Memoryless determinacy of parity and mean payoff games: a simple proof
    2004. In: Theoretical Computer Science, ISSN 0304-3975, Vol. 310, p. 365-378. Article in journal (Refereed). Published.
    Identifiers
    urn:nbn:se:uu:diva-92511 (URN)
    Available from: 2005-01-18. Created: 2005-01-18. Bibliographically approved.
    4. A combinatorial strongly subexponential algorithm for mean payoff games
    2004. In: Mathematical Foundations of Computer Science 2004, 2004, p. 673-685. Chapter in book (Other academic). Published.
    Identifiers
    urn:nbn:se:uu:diva-92512 (URN); 3-540-22823-3 (ISBN)
    Available from: 2005-01-18. Created: 2005-01-18. Bibliographically approved.
    5. The controlled linear programming problem: DIMACS Technical Report 2004-41, September 2004
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:uu:diva-92513 (URN)
    Available from: 2005-01-18. Created: 2005-01-18. Last updated: 2010-01-13. Bibliographically approved.
    Download full text (pdf)
    FULLTEXT01
  • 25.
    Brandauer, Stephan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Structured Data, 2018. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    References are a programming-language construct that lets a programmer access a datum independently of its location.

    References permit aliasing -- several references to the same object, effectively making a single object accessible through different names (or paths). Aliasing, especially of mutable data, is both a blessing and a curse: when used correctly, it can make a programmer's life easier; when used incorrectly, for example through accidental aliases that the programmer is unaware of, aliasing can lead to hard to find bugs, and hard to verify programs.

    Aliases allow us to build efficient data structures by connecting objects together, making them immediately reachable. Aliases are at the heart of many useful programming idioms. But with great power comes great responsibility: unless a programmer carefully manages aliases in a program, aliases propagate changes and make parts of a program's memory change seemingly for no reason. Additionally, such bugs are very easy to make but very hard to track down.

    This thesis presents an overview of techniques for controlling how, when and if data can be aliased, as well as how and if data can be mutated. Additionally, it presents three different projects aimed at conserving the blessings, but reducing the curses. The first project is disjointness domains, a type system for expressing intended aliasing in a fine-grained manner so that aliasing will not be unexpected; the second project is Spencer, a tool to flexibly and precisely analyse the use of aliasing in programs to improve our understanding of how aliasing of mutable data is used in practice; and the third project is C♭, an approach for implementing high-level collection data structures using a richer reference construct that reduces aliasing problems but still retains many of aliasing's benefits.
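    The kind of accidental-alias bug the abstract describes is easy to reproduce in a few lines of Python; the names in this fragment are purely illustrative and not from the thesis.

```python
inventory = ["sword", "shield"]   # a mutable object
backup = inventory                # an alias: two names, one list object
inventory.append("potion")
print(backup)                     # ['sword', 'shield', 'potion'] -- the "backup" changed too

snapshot = list(inventory)        # copying breaks the alias
inventory.append("scroll")
print(snapshot)                   # ['sword', 'shield', 'potion'] -- unaffected by the later change
```

Type systems such as the thesis's disjointness domains aim to make the first situation (unintended sharing) impossible to write by accident, rather than relying on defensive copies.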

    List of papers
    1. Disjointness Domains for Fine-Grained Aliasing
    2015 (English). Conference paper, Published paper (Refereed).
    Abstract [en]

    Aliasing is crucial for supporting useful implementation patterns, but it makes reasoning about programs difficult. To deal with this problem, numerous type-based alias-control mechanisms have been proposed, expressing properties such as uniqueness. Uniqueness, however, is black-and-white: either a reference is unique or it can be arbitrarily aliased. It is also global: it excludes aliases throughout the entire system, making code brittle under changing requirements. Disjointness domains, a new approach to alias control, address this problem by enabling more graduations between uniqueness and arbitrary reference sharing. They allow expressing aliasing constraints local to a certain set of variables (either stack variables or fields): for instance, that no aliasing occurs between variables within some set, but possibly between such sets; or the opposite, that aliasing may occur within a set but not between different sets. A hierarchy of disjointness domains controls the flow of references through a program, helping the programmer reason about disjointness and enforce local alias invariants. The resulting system supports fine-grained control of aliasing between both variables and objects, making aliasing explicit to programmers, compilers, and tooling. This paper presents a formal account of disjointness domains along with examples. Disjointness domains provide novel means of expressing may-alias kinds of constraints, which may prove useful in compiler optimisation and verification.

    Series
    ACM SIGPLAN NOTICES, ISSN 0362-1340
    Keywords
    Design; Theory; Aliasing; mutable state; type systems; uniqueness; linear types
    National Category
    Electrical Engineering, Electronic Engineering, Information Engineering
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-268747 (URN); 10.1145/2814270.2814280 (DOI); 000367256500051
    Conference
    ACM International Conference on Object Oriented Programming Systems Languages and Applications (OOPSLA)
    Available from: 2015-12-09. Created: 2015-12-09. Last updated: 2018-11-27. Bibliographically approved.
    2. Spencer: Interactive Heap Analysis for the Masses
    2017 (English). In: 2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR 2017), IEEE, 2017, no. 14, p. 113-123. Conference paper, Published paper (Refereed).
    Abstract [en]

    Programming-language design and run-time implementation require detailed knowledge about the programs that users want to implement. Acquiring this knowledge is hard, and there is little tool support to effectively estimate whether a proposed tradeoff actually makes sense in the context of real-world applications. Ideally, knowledge about the behaviour of "typical" programs is 1) easily obtainable, 2) easily reproducible, and 3) easily sharable. We present Spencer, an open-source web service and API framework for dynamic analysis of a continuously growing set of traces of standard program corpora. Users do not obtain traces on their own, but instead send queries to the web service that are executed on a set of program traces. Queries are built from a set of query combinators that present a high-level interface for working with trace data. Since the framework is high-level, and there is a hosted collection of recorded traces, queries are easy to implement. Since the data sets are shared by the research community, results are reproducible. Since the actual queries run on one (or many) servers that provide analysis as a service, obtaining results is possible on commodity hardware. Data in Spencer is meant to be obtained once and analysed often, making the overhead of data collection mostly irrelevant. This allows Spencer to collect more data than traditional tracing tools can afford within their performance budget. Results in Spencer are cached, making complicated analyses that build on cached primitive queries fast.

    Place, publisher, year, edition, pages
    IEEE, 2017
    Series
    IEEE International Working Conference on Mining Software Repositories, E-ISSN 2160-1852
    Keywords
    tracing, dynamic analysis, heap analysis
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-334818 (URN); 10.1109/MSR.2017.35 (DOI); 000425917100013; 978-1-5386-1544-7 (ISBN)
    Conference
    IEEE/ACM 14th International Conference on Mining Software Repositories (MSR), Buenos Aires, Argentina, May 20-21, 2017
    Available from: 2017-11-28. Created: 2017-11-28. Last updated: 2018-11-27. Bibliographically approved.
    3. Mining for Safety using Interactive Trace Analysis
    2017 (English). In: Pre-Proceedings, Fifteenth International Workshop on Quantitative Aspects of Programming Languages and Systems, 2017, no. 15, article id 14. Conference paper, Published paper (Refereed).
    Keywords
    tracing, dynamic analysis, heap analysis
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-334817 (URN)
    Conference
    Fifteenth International Workshop on Quantitative Aspects of Programming Languages and Systems (QAPL)
    Available from: 2017-11-28. Created: 2017-11-28. Last updated: 2018-11-27. Bibliographically approved
    4. C♭: A New Modular Approach to Implementing Efficient and Tunable Collections
    2018 (English). In: Proceedings of the 2018 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward! 2018), ACM, 2018, p. 57-71. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    ACM, 2018
    Keywords
    data structure design, domain specific language, performance tuning
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-366812 (URN); 10.1145/3276954.3276956 (DOI); 000455805700005 ()
    Conference
    SPLASH 2018 - Systems, Programming, Languages and Applications: Software for Humanity, Boston, 4-9 November, 2018
    Projects
    UPMARC; 2012-04867 Structured Aliasing
    Funder
    Swedish Research Council, 2012-04967
    Available from: 2018-11-26. Created: 2018-11-26. Last updated: 2019-12-13. Bibliographically approved
  • 26.
    Bylund, Markus
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Personal service environments: Openness and user control in user-service interaction. 2001. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis describes my work with making the whole experience of using electronic services more pleasant and practical. More and more people use electronic services in their daily life — be it services for communicating with colleagues or family members, web-based bookstores, or network-based games for entertainment. However, electronic services in general are more difficult to use than they would have to be. They are limited in how and when users can access them. Services do not collaborate despite obvious advantages to their users, and they put the integrity and privacy of their users at risk.

    In this thesis, I argue that there are structural reasons for these problems, rather than problems with content or the technology per se. The focus when designing electronic services tends to be on the service providers or on the artifacts that are used for accessing the services. I present an approach that focuses on the user instead, based on the concept of personal service environments. These provide a mobile locale for storing and running the electronic services of individual users. This gives the user increased control over which services to use, from where they can be accessed, and what personal information services gather. The concept allows, and encourages, service collaboration, but not without letting the user maintain control over the process. Finally, personal service environments allow continuous usage of services while switching between interaction devices and moving between places.

    The sView system, which is also described, implements personal service environments and serves as an example of how the concept can be realized. The system consists of two parts. The first part is a specification of how both services for sView and infrastructure for handling services should be developed. The second part is a reference implementation of the specification, which includes sample services that add to and demonstrate the functionality of sView.

  • 27.
    Byström, Adam
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    From Intent to Code: Using Natural Language Processing. 2017. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Programming, and the ability to express one’s intent to a machine, is becoming a very important skill in our digitalizing society. Today, instructing a machine, such as a computer, to perform actions is done through programming. What if this could be done with human language? This thesis examines how new technologies and methods in the form of Natural Language Processing can be used to make programming more accessible by translating intent expressed in natural language into code that a computer can execute. Related research has studied using natural language as a programming language and using natural language to instruct robots. These studies have shown promising results but are hindered by strict syntaxes, limited domains and an inability to handle ambiguity. Studies have also been made using Natural Language Processing to analyse source code, turning code into natural language. This thesis takes the reverse approach. By utilizing Natural Language Processing techniques, an intent can be translated into code containing concepts such as sequential execution, loops and conditional statements. In this study, a system for converting intent, expressed in English sentences, into code is developed. To analyse this approach to programming, an evaluation framework is developed, evaluating the system during the development process as well as usage of the final system. The results show that this way of programming might have potential, but conclude that the Natural Language Processing models still have too low accuracy. Further research is required to increase this accuracy to further assess the potential of this way of programming.
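    A deliberately tiny, pattern-based caricature of the intent-to-code idea; the thesis uses real NLP models, not regexes, and this sentence template is invented purely for illustration:

```python
# Toy stand-in for intent-to-code translation: an English sentence
# matching an (invented) template becomes executable code with a loop.
import re

def intent_to_code(sentence):
    m = re.match(r"repeat (\d+) times: say (\w+)", sentence.lower())
    if m:
        n, word = m.groups()
        return f'for _ in range({n}):\n    print("{word}")'
    raise ValueError("intent not understood")

code = intent_to_code("Repeat 3 times: say hello")
print(code)
exec(code)  # prints "hello" three times
```

    The hard part, which the thesis evaluates with NLP models, is doing this without a fixed template: handling free word order, synonyms and ambiguity.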

  • 28.
    Carlsson, Elin
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Mattsson, Moa
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    The MaRiQ model: A quantitative approach to risk management. 2019. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In recent years, cyber attacks and data fraud have become major issues for companies, businesses and nation states alike. The need for more accurate and reliable risk management models is therefore substantial.

    Today, cybersecurity risk management is often carried out on a qualitative basis, where risks are assigned to a predefined set of categories such as low, medium or high. This thesis aims to challenge that practice by presenting a model that assesses risks quantitatively, hence its name: MaRiQ (Manage Risks Quantitatively).

    MaRiQ was developed based on collected requirements and contemporary literature on quantitative risk management. The model consists of a clearly defined flowchart and a supporting tool created in Excel. To generate scientifically validated results, MaRiQ makes use of a number of statistical techniques and mathematical functions, such as Monte Carlo simulations and probability distributions.
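    The general technique of sampling per-risk probabilities and loss ranges into a total-loss distribution can be sketched as follows; the risk numbers are invented, and this is not the MaRiQ tool itself (which is Excel-based), only a minimal illustration of the Monte Carlo idea:

```python
# Minimal Monte Carlo risk-aggregation sketch with invented numbers.
import random
random.seed(0)

risks = [
    # (probability of occurring this year, low loss, high loss)
    (0.30, 10_000, 100_000),    # e.g. a phishing incident
    (0.05, 100_000, 1_000_000)  # e.g. a major data breach
]

def simulate_year():
    total = 0.0
    for p, low, high in risks:
        if random.random() < p:
            # triangular distribution as a simple stand-in for an
            # expert-given loss range
            total += random.triangular(low, high)
    return total

trials = [simulate_year() for _ in range(10_000)]
expected_loss = sum(trials) / len(trials)
p95 = sorted(trials)[int(0.95 * len(trials))]
print(f"expected annual loss: {expected_loss:,.0f}")
print(f"95th percentile loss: {p95:,.0f}")
```

    The output distribution, rather than a single low/medium/high label, is what makes the approach quantitative: it supports statements such as "there is a 5% chance of losing more than X this year".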

    To evaluate whether our developed model really was an improvement over current qualitative processes, we conducted a workshop at the end of the project. The organization that tested MaRiQ found the model useful and felt that it fulfilled most of their needs.

    Our results indicate that risk management within cybersecurity can and should be performed using more quantitative approaches than what is standard practice today. Even though there are several potential developments to be made, MaRiQ demonstrates the possible advantages of transitioning from qualitative to quantitative risk management processes.

  • 29.
    Carlsson, Per
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Algorithms for Electronic Power Markets. 2004. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis we focus on resource allocation problems, and electronic markets in particular. Our main application area is electricity markets. We present a number of algorithms and include practical experience.

    There is an ongoing restructuring of power markets in Europe and elsewhere. This implies that an industry that has previously been viewed as a natural monopoly becomes exposed to competition. In the thesis we move a step further, suggesting that end users should take active part in the trade on power markets, such as (i) day-ahead markets and (ii) markets handling close-to-real-time balancing of power grids. Our ideas and results can be utilised (a) to increase the efficiency of these markets and (b) to handle strained situations when power systems operate at their limits. For this we utilise information and communication technology available today and develop electronic market mechanisms designed for large numbers of participants, typically distributed over a power grid.

    The papers of the thesis cover resource allocation with separable objective functions, a market mechanism that accepts actors with discontinuous demand, and mechanisms that allow actors to express combinatorial dependencies between traded commodities on multi-commodity markets. Further we present results from field tests and simulations.

    List of papers
    1. Resource Allocation With Wobbly Functions
    2002. In: Computational Optimization and Applications, ISSN 0926-6003, Vol. 23, no 2, p. 171-200. Article in journal (Refereed). Published
    Identifiers
    urn:nbn:se:uu:diva-92381 (URN)
    Available from: 2004-11-22. Created: 2004-11-22. Bibliographically approved
    2. Extending Equilibrium Markets
    2001. In: IEEE Intelligent Systems, ISSN 1094-7167, Vol. 16, no 4, p. 18-26. Article in journal (Refereed). Published
    Identifiers
    urn:nbn:se:uu:diva-92382 (URN)
    Available from: 2004-11-22. Created: 2004-11-22. Bibliographically approved
    3. Communication Test of Electronic Power Markets through Power Line Communication
    Chapter in book (Other academic) Published
    Identifiers
    urn:nbn:se:uu:diva-92383 (URN)
    Available from: 2004-11-22. Created: 2004-11-22. Bibliographically approved
    4. A Tractable Mechanism for Time Dependent Markets
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:uu:diva-92384 (URN)
    Available from: 2004-11-22. Created: 2004-11-22. Last updated: 2010-01-13. Bibliographically approved
    5. A Flexible Model for Tree-Structured Multi-Commodity Markets
    2007 (English). In: Electronic Commerce Research, ISSN 1389-5753, E-ISSN 1572-9362, Vol. 7, no 1, p. 69-88. Article in journal (Refereed). Published
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-92385 (URN); 10.1007/s10660-006-0063-Y (DOI)
    Available from: 2004-11-22. Created: 2004-11-22. Last updated: 2018-01-13. Bibliographically approved
    6. Market Simulations
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:uu:diva-92386 (URN)
    Available from: 2004-11-22. Created: 2004-11-22. Last updated: 2010-01-13. Bibliographically approved
  • 30.
    Carlsson, Per
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Market and resource allocation algorithms with application to energy control. 2001. Licentiate thesis, monograph (Other scientific)
    Abstract [en]

    The energy markets of today are markets with rather few active participants. The participants are, with few exceptions, large producers and distributors. The market mechanisms that are used are constructed with this kind of a market situation in mind. With an automatic or semiautomatic approach, the market mechanism would be able to incorporate a larger number of participants. Smaller producers, and even consumers, could take an active part in the market. The gain is in more efficient markets, and – due to smaller fluctuations in demand – better resource usage from an environmental perspective.

    The energy markets of the Nordic countries (as well as many others) were deregulated during the last few years. The change has been radical and the situation is still rather new. We believe that the market can be made more efficient with the help of the dynamics of the small actors.

    The idealised world of economic theory often relies on assumptions such as continuous demand and supply curves. These assumptions are useful, and they do not introduce problems in the power market situation of today, with relatively few, large participants. When consumers and small producers are introduced on the market, the situation is different. Then it is a drawback if the market mechanisms cannot handle discontinuous supply and demand.

    The growth in accessibility to computational power and data communications that we have experienced in the last years (and are experiencing) could be utilised when constructing mechanisms for the energy markets of tomorrow.

    In this thesis we suggest a new market mechanism, ConFAst, that utilises this technological progress to make it possible to incorporate a large number of active participants in the market. The mechanism does not rely on the assumptions above. The gain is a more efficient market with smaller fluctuations in demand over the day.

    To make this possible there is a need for efficient algorithms; in particular, the mechanism relies on an efficient aggregation algorithm. An algorithm for aggregation of objective functions is part of this thesis. The algorithm handles maximisation with nonconcave, even noisy, objective functions. Experimental results show that the approach, in practically relevant cases, is significantly faster than the standard algorithm.
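    To illustrate what aggregation of objective functions means here, the sketch below max-convolves two agents' discrete value functions over a resource grid, so the market can treat the pair as a single participant. This is the generic textbook construction with invented numbers, not the thesis's faster algorithm:

```python
# Aggregation of two discrete objective functions by max-convolution:
# agg[x] = max over x1 + x2 = x of v1[x1] + v2[x2].
def aggregate(v1, v2):
    """v1, v2: lists where v[i] is the value of receiving i units.
    Returns the aggregated value function and agent 1's optimal share."""
    n = len(v1) + len(v2) - 1
    agg, split = [float("-inf")] * n, [0] * n
    for x1, a in enumerate(v1):
        for x2, b in enumerate(v2):
            if a + b > agg[x1 + x2]:
                agg[x1 + x2] = a + b
                split[x1 + x2] = x1
    return agg, split

# Nonconcave example: agent 2 only benefits from two units or more.
v1 = [0, 5, 7, 8]
v2 = [0, 0, 6, 9]
agg, split = aggregate(v1, v2)
print(agg[4], split[4])  # 14 1: give agent 1 one unit, agent 2 three
```

    Note that neither concavity nor continuity is needed for this construction, which is exactly why it suits markets with discontinuous supply and demand; the thesis's contribution is doing it fast enough for many participants.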

  • 31.
    Castegren, Elias
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Capability-Based Type Systems for Concurrency Control. 2018. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Since the early 2000s, in order to keep up with the performance predictions of Moore's law, hardware vendors have had to turn to multi-core computers. Today, parallel hardware is everywhere, from massive server halls to the phones in our pockets. However, this parallelism does not come for free. Programs must explicitly be written to allow for concurrent execution, which adds complexity that is not present in sequential programs. In particular, if two concurrent processes share the same memory, care must be taken so that they do not overwrite each other's data. This issue of data-races is exacerbated in object-oriented languages, where shared memory in the form of aliasing is ubiquitous. Unfortunately, most mainstream programming languages were designed with sequential programming in mind, and therefore provide little or no support for handling this complexity. Even though programming abstractions like locks can be used to synchronise accesses to shared memory, the burden of using these abstractions correctly and efficiently is left to the programmer.

    The contribution of this thesis is programming language technology for controlling concurrency in the presence of shared memory. It is based on the concept of reference capabilities, which facilitate safe concurrent programming by restricting how memory may be accessed and shared. Reference capabilities can be used to enforce correct synchronisation when accessing shared memory, as well as to prevent unsafe sharing when using more fine-grained concurrency control, such as lock-free programming. This thesis presents the design of a capability-based type system with low annotation overhead, that can statically guarantee the absence of data-races without giving up object-oriented features like aliasing, subtyping and code reuse. The type system is formally proven safe, and has been implemented for the highly concurrent object-oriented programming language Encore.

    List of papers
    1. Reference Capabilities for Trait Based Reuse and Concurrency Control
    2016 (English)Report (Other academic)
    Abstract [en]

    The proliferation of shared mutable state in object-oriented programming complicates software development, as two seemingly unrelated operations may interact via an alias and produce unexpected results. In concurrent programming this manifests itself as data-races.

    Concurrent object-oriented programming further suffers from the fact that code that warrants synchronisation cannot easily be distinguished from code that does not. The burden is placed solely on the programmer to reason about alias freedom, sharing across threads and side-effects to deduce where and when to apply concurrency control, without inadvertently blocking parallelism.

    This paper presents a reference capability approach to concurrent and parallel object-oriented programming where all uses of aliases are guaranteed to be data-race free. The static type of an alias describes its possible sharing without using explicit ownership or effect annotations. Type information can express non-interfering deterministic parallelism without dynamic concurrency control, thread-locality, lock-based schemes, and guarded-by relations giving multi-object atomicity to nested data structures. Unification of capabilities and traits allows trait-based reuse across multiple concurrency scenarios with minimal code duplication. The resulting system brings together features from a wide range of prior work in a unified way.

    Series
    Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2016-007
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-309774 (URN)
    Projects
    Structured Aliasing; Upscale; UPMARC
    Available from: 2016-12-07. Created: 2016-12-07. Last updated: 2019-12-13. Bibliographically approved
    2. Kappa: Insights, Current Status and Future Work
    2016 (English)Conference paper, Oral presentation with published abstract (Refereed)
    Abstract [en]

    KAPPA is a type system for safe concurrent object-oriented programming using reference capabilities. It uses a combination of static and dynamic techniques to guarantee data-race freedom, and, for a certain subset of the system, non-interference (and thereby deterministic parallelism). It combines many features from previous work on alias management, such as substructural types, regions, ownership types, and fractional permissions, and brings them together using a unified set of primitives.

    In this extended abstract we show how KAPPA’s capabilities express variations of the aforementioned concepts, discuss the main insights from working with KAPPA, present the current status of the implementation of KAPPA in the context of the actor language Encore, and discuss ongoing and future work. 

    Keywords
    Type systems, Language Implementation, Capabilities, Traits, Concurrency, Object-Oriented
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-307137 (URN)
    Conference
    IWACO
    Projects
    Structured Aliasing; Upscale; UPMARC
    Funder
    EU, FP7, Seventh Framework Programme, FP7-612985
    Available from: 2016-12-07. Created: 2016-11-09. Last updated: 2019-12-13. Bibliographically approved
    3. Types for CAS: Relaxed Linearity with Ownership Transfer
    2017 (English)Manuscript (preprint) (Other academic)
    Abstract [en]

    Linear references are guaranteed to be free from aliases. This is a strong property that simplifies reasoning about programs and enables powerful optimisations, but it is also a property that is too strong for many applications. Notably, lock-free algorithms, which implement protocols that ensure safe, non-blocking concurrent access to data structures, are generally not typable with linear references because they rely on aliasing to achieve lock-freedom.

    This paper presents LOLCAT, a type system with a relaxed notion of linearity that allows an unbounded number of aliases to an object as long as at most one alias at a time owns the right to access the contents of the object. This ownership can be transferred between aliases, but can never be duplicated. LOLCAT types are powerful enough to type several lock-free data structures and give a compile-time guarantee of absence of data-races when accessing owned data. In particular, LOLCAT is able to assign types to the CAS (compare and swap) primitive that precisely describe how ownership is transferred across aliases, possibly across different threads.

    The paper introduces LOLCAT through a sound core procedural calculus, and shows how LOLCAT can be applied to three fundamental lock-free data structures. It also shows how LOLCAT can be used to implement synchronisation primitives like locks, and discusses a prototype implementation which integrates LOLCAT with an object-oriented programming language.
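    The CAS primitive that LOLCAT assigns types to can be illustrated with a Treiber-style lock-free stack push. Python has no hardware CAS, so in this sketch a lock stands in for the atomic instruction, and `AtomicRef`/`Node` are helper names invented for the example; the retry protocol is the part that relies on aliasing:

```python
# Sketch of compare-and-swap (CAS) and a lock-free-style stack push.
import threading

class AtomicRef:
    def __init__(self, value=None):
        self._value = value
        self._lock = threading.Lock()  # stands in for hardware atomicity

    def get(self):
        return self._value

    def compare_and_swap(self, expected, new):
        """Atomically set to `new` iff the current value is `expected`."""
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

class Node:
    def __init__(self, item, next_node):
        self.item, self.next = item, next_node

top = AtomicRef(None)

def push(item):
    while True:               # retry loop: many aliases to `top`, no locks
        old = top.get()
        if top.compare_and_swap(old, Node(item, old)):
            return            # ownership of the new node transferred in

push(1)
push(2)
print(top.get().item)  # 2
```

    The typing challenge addressed by LOLCAT is visible even here: between `get` and `compare_and_swap`, `old` is an alias whose ownership status is unresolved until the CAS succeeds or fails.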

    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-336019 (URN)
    Available from: 2017-12-11. Created: 2017-12-11. Last updated: 2023-10-26. Bibliographically approved
    4. Bestow and Atomic: Concurrent programming using isolation, delegation and grouping
    2018 (English). In: The Journal of Logical and Algebraic Methods in Programming, ISSN 2352-2208, E-ISSN 2352-2216, Vol. 100, p. 130-151. Article in journal (Refereed). Published
    Abstract [en]

    Any non-trivial concurrent system warrants synchronisation, regardless of the concurrency model. Actor-based concurrency serialises all computations in an actor through asynchronous message passing. In contrast, lock-based concurrency serialises some computations by following a lock-unlock protocol for accessing certain data. Both systems require sound reasoning about pointers and aliasing to exclude data-races. If actor isolation is broken, so is the single-thread-of-control abstraction. Similarly for locks, if a datum is accessible outside of the scope of the lock, the datum is not governed by the lock. In this paper we discuss how to balance aliasing and synchronisation. In previous work, we defined a type system that guarantees data-race freedom of actor-based concurrency and lock-based concurrency. This paper extends this work by the introduction of two programming constructs; one for decoupling isolation and synchronisation and one for constructing higher-level atomicity guarantees from lower-level synchronisation. We focus predominantly on actors, and in particular the Encore programming language, but our ultimate goal is to define our constructs in such a way that they can be used both with locks and actors, given that combinations of both models occur frequently in actual systems. We discuss the design space, provide several formalisations of different semantics and discuss their properties, and connect them to case studies showing how our proposed constructs can be useful. We also report on an on-going implementation of our proposed constructs in Encore. 

    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-336020 (URN); 10.1016/j.jlamp.2018.06.007 (DOI); 000444363000008 ()
    Projects
    UPMARC
    Funder
    EU, FP7, Seventh Framework Programme, 612985; Swedish Research Council, 2012-4967
    Available from: 2018-06-30. Created: 2017-12-11. Last updated: 2023-10-31. Bibliographically approved
    5. OOlong: An Extensible Concurrent Object Calculus
    2018 (English). In: SAC '18: Proceedings of the 33rd Annual ACM Symposium on Applied Computing, 2018, p. 1022-1029. Conference paper, Published paper (Refereed)
    Abstract [en]

    We present OOlong, an object calculus with interface inheritance, structured concurrency and locks. The goal of the calculus is extensibility and reuse. The semantics are therefore available in a version for LaTeX typesetting (written in Ott), and a mechanised version for doing rigorous proofs in Coq.

    Keywords
    Object Calculi, Semantics, Mechanisation, Concurrency
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-335174 (URN); 10.1145/3167132.3167243 (DOI); 000455180700147 (); 978-1-4503-5191-1 (ISBN)
    Conference
    33rd Annual ACM Symposium on Applied Computing (ACM SAC), Pau, France, April 9–13, 2018
    Note

    Book Group Author(s): Assoc Comp Machinery

    Available from: 2017-12-01. Created: 2017-12-01. Last updated: 2022-09-01. Bibliographically approved
  • 32.
    Castegren, Elias
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Wrigstad, Tobias
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Encore: Coda. 2023. In: Active Object Languages: Current Research Trends, Springer Nature, 2023. Chapter in book (Refereed)
    Abstract [en]

    Encore is a programming language that was developed between 2014 and 2019. Encore was designed following the principle of inversion of defaults: computations are concurrent (rather than sequential) by default; data is isolated (rather than freely sharable) by default. The language worked as a seedbed for a large number of research ideas aimed at making programming with active objects safe, expressive and efficient.

    Encore allows active objects to share data but statically ensures the absence of data races and allows fully concurrent garbage collection. Active objects can synchronize using first-class futures, which are also used to delegate and coalesce computations across active objects. The type system also supports orchestration of intra-object parallelism, expressed using composable units of computation. Active objects which see a lot of traffic can turn themselves into passive objects protected by lock-free synchronization mechanisms to avoid performance bottlenecks, while still facilitating safe sharing and concurrent garbage collection.

    This paper gives an overview of these features of Encore, reflecting on lessons learned from trying to fit all of these research ideas into a single language.

  • 33. Cheeseman, Luke
    et al.
    Parkinson, Matthew J.
    Clebsch, Sylvan
    Kogias, Marios
    Drossopoulou, Sophia
    Chisnall, David
    Wrigstad, Tobias
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Liétar, Paul
    When Concurrency Matters: Behaviour-Oriented Concurrency. 2023. In: Proceedings of the ACM on Programming Languages, E-ISSN 2475-1421, Vol. 7, no OOPSLA2, p. 1531-1560. Article in journal (Refereed)
    Abstract [en]

    Expressing parallelism and coordination is central for modern concurrent programming. Many mechanisms exist for expressing both parallelism and coordination. However, the design decisions for these two mechanisms are tightly intertwined. We believe that the interdependence of these two mechanisms should be recognised and achieved through a single, powerful primitive. We are not the first to realise this: the prime example is actor model programming, where parallelism arises through fine-grained decomposition of a program’s state into actors that are able to execute independently in parallel. However, actor model programming has a serious pain point: updating multiple actors as a single atomic operation is a challenging task. We address this pain point by introducing a new concurrency paradigm: Behaviour-Oriented Concurrency (BoC). In BoC, we are revisiting the fundamental concept of a behaviour to provide a more transactional concurrency model. BoC enables asynchronously creating atomic and ordered units of work with exclusive access to a collection of independent resources. In this paper, we describe BoC informally in terms of examples, which demonstrate the advantages of exclusive access to several independent resources, as well as the need for ordering. We define it through a formal model. We demonstrate its practicality by implementing a C++ runtime. We argue its applicability through the Savina benchmark suite: benchmarks in this suite can be more compactly represented using BoC in place of Actors, and we observe comparable, if not better, performance.
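    The core idea of behaviours with exclusive access to several independent resources can be caricatured as below. The names `Cown` (concurrent owner) and `when` echo the paper's terminology, but this scheduler (ordered lock acquisition on a fresh thread) is a deliberately naive stand-in for a real BoC runtime:

```python
# Toy sketch of a BoC-style `when`: a behaviour runs asynchronously once
# it holds exclusive access to all requested resources. Acquiring locks
# in a global order avoids deadlock in this simplified model.
import threading

class Cown:
    """A concurrent owner wrapping some state."""
    _counter = 0
    def __init__(self, state):
        self.state = state
        self.lock = threading.Lock()
        Cown._counter += 1
        self.order = Cown._counter  # total order used for acquisition

def when(*cowns):
    def schedule(behaviour):
        def body():
            for c in sorted(cowns, key=lambda c: c.order):
                c.lock.acquire()
            try:
                behaviour(*cowns)  # exclusive access to all cowns
            finally:
                for c in cowns:
                    c.lock.release()
        t = threading.Thread(target=body)
        t.start()
        return t
    return schedule

# Atomic transfer between two accounts: the actor-model pain point
# (updating multiple actors atomically) is a one-liner here.
src, dst = Cown(100), Cown(0)

def transfer(a, b):
    a.state -= 30
    b.state += 30

when(src, dst)(transfer).join()
print(src.state, dst.state)  # 70 30
```

    What this sketch cannot show is BoC's ordering guarantee between behaviours scheduled on overlapping cowns, which the paper defines via its formal model.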

  • 34.
    Dagfalk, Johanna
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Kyhle, Ellen
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Listening in on Productivity: Applying the Four Key Metrics to measure productivity in a software development company. 2021. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Software development is an area in which companies not only need to keep up with the latest technology, but they additionally need to continuously increase their productivity to stay competitive in the industry. One company currently facing these challenges is Storytel - one of the strongest players on the Swedish audiobook market - with about a fourth of all employees involved with software development, and a rapidly growing workforce.

    With the purpose of understanding how the Storytel Tech Department is performing, this thesis maps Storytel’s productivity as defined through the Four Key Metrics - Deployment Frequency, Delivery Lead Time, Mean Time To Restore and Change Fail Rate. The department is classified into a performance category (Low, Medium, High, Elite) through a deep dive into the raw system data existing at Storytel, mainly the case management system Jira. A survey of the Tech Department was conducted to give insights into the connection between the human and technical factors influencing productivity (categorized into Culture, Environment, and Process) and estimated productivity. Alongside these data collections, interviews with Storytel employees were performed to gather further knowledge about the Tech Department and to understand potential bottlenecks and obstacles.

    All Four Key Metrics could be determined from raw system data, except Mean Time To Restore, which was complemented by survey estimates. The generalised findings on the Four Key Metrics indicate that Storytel can be classified as at least a ‘medium’ performer. The factors, validated through factor analysis, found to influence the Four Key Metrics were Generative Culture, Efficiency (Automation and Shared Responsibility) and Number of Projects. Lastly, the major bottlenecks found related to Architecture, Automation, Time Fragmentation and Communication.
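    As a concrete illustration of the four metrics, each one reduces to simple arithmetic over a deployment log. The record layout and numbers below are invented for the sketch, not Storytel's data:

```python
# Hypothetical illustration: computing the Four Key Metrics from a
# small deployment log. Field layout is made up for the sketch:
# (commit_time, deploy_time, failed, restored_after).
from datetime import datetime, timedelta
from statistics import median

deploys = [
    (datetime(2021, 3, 1, 9), datetime(2021, 3, 1, 15), False, None),
    (datetime(2021, 3, 2, 10), datetime(2021, 3, 3, 11), True, timedelta(hours=2)),
    (datetime(2021, 3, 4, 8), datetime(2021, 3, 4, 12), False, None),
]

days = (deploys[-1][1] - deploys[0][1]).days or 1
deployment_frequency = len(deploys) / days                 # deploys per day
delivery_lead_time = median(d[1] - d[0] for d in deploys)  # commit -> deploy
change_fail_rate = sum(d[2] for d in deploys) / len(deploys)
failures = [d[3] for d in deploys if d[3] is not None]
mean_time_to_restore = sum(failures, timedelta()) / len(failures)

print(deployment_frequency, delivery_lead_time,
      change_fail_rate, mean_time_to_restore)
```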

    The thesis contributes with interesting findings from an expanding, middle-sized, healthy company in the audiobook streaming industry - but the results can be beneficial for other software development companies to learn from as well. Performing a similar study with a greater sample size, and additionally enabling comparisons between teams, is suggested for future research.

    Download full text (pdf)
    fulltext
  • 35.
    Dahlin, Niklas
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Optimering av takttidtabell för järnvägstrafik på en regional nivå: En fallstudie av fyra Mälarstäder [Optimisation of a cyclic railway timetable at the regional level: a case study of four Mälaren cities] (2018). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Effective commuting is an important part of regional development and attractiveness, and railway traffic is a favourable mode of transportation owing to its energy efficiency and environmental friendliness. Getting more people to choose the train for commuting is therefore desirable. One concept aiming to increase the use of railway traffic is the cyclic timetable. At present the concept is mostly used at a national level, but there are possibilities to implement the ideas at a more regional level.

    The purpose of this thesis is to study if and how a cyclic timetable for railway traffic can be constructed and optimised for a region, more specifically the region “Fyra Mälarstäder”. Challenges and opportunities to implement this type of timetable on a regional level are also discussed.

    In order to construct a timetable for railway traffic, several infrastructural limitations must be taken into account. An example that severely limits railway capacity is single-track sections. Hence, to construct and optimise the timetable, these limitations were formulated mathematically, together with a number of other criteria, as constraints of an optimisation problem. The objective function consisted of a weighted sum of trip times within the system, which was minimised.

    The results show that a cyclic timetable could successfully be used for “Fyra Mälarstäder”. However, some aspects remain to be investigated, including train line continuation beyond the system boundaries of the study. As for the optimisation, the weighting of the objective function appears to play a considerable role in obtaining a satisfactory timetable. Varying and adjusting certain parameters may also be favourable for achieving as beneficial a timetable as possible.
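    The optimisation setup described above can be illustrated in miniature. All numbers below are invented: two opposite-direction train lines share one single-track section in a 60-minute cycle, the single-track constraint is the feasibility check, and the objective is a weighted sum of transfer waits standing in for weighted trip times. Brute force replaces a proper solver, and the thesis's real model has many more constraints:

```python
# Toy sketch: pick cyclic (every-60-min) departure offsets minimising
# a weighted sum of waiting times, subject to a single-track constraint.
from itertools import product

CYCLE = 60
RUN = 18             # minutes a train occupies the single-track section
WEIGHTS = (25, 40)   # passenger flows transferring A->B and B->A

def wait(arrive, depart):
    return (depart - arrive) % CYCLE

def feasible(a, b):
    # Opposite-direction trains may not be on the single track at once:
    # occupations [a, a+RUN) and [b, b+RUN) must not overlap mod 60.
    gap = (b - a) % CYCLE
    return gap >= RUN and (CYCLE - gap) >= RUN

cost, a, b = min(
    ((WEIGHTS[0] * wait(a + RUN, b) + WEIGHTS[1] * wait(b + RUN, a), a, b)
     for a, b in product(range(CYCLE), repeat=2) if feasible(a, b)),
    key=lambda t: t[0])
print(cost, a, b)
```

    Even this toy shows the thesis's observation about weighting: changing the two weights moves the optimal offsets.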

    Download full text (pdf)
    fulltext
  • 36.
    Daniels, Mats
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Education Research. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Pears, Arnold
    Dept of Learning in Engineering Sciences, KTH Royal Institute of Technology, Stockholm, Sweden.
    Nylén, Aletta
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Education Research. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Inspiring Computational Thinking: A Science Fair Activity (2021). In: 2021 IEEE Frontiers in Education Conference (FIE), Institute of Electrical and Electronics Engineers (IEEE), 2021. Conference paper (Refereed).
    Abstract [en]

    This Research Full Paper introduces a model for structuring playful, challenge-based learning activities. The model draws on a process of decontextualising a challenge to access its Computational Thinking intellectual and conceptual components, computationalising these components into computational artifacts, and recontextualising the artifacts to render them attractive and accessible to school pupils. The case study follows a thick-description approach to evaluating the engagement potential of the instructional design, as well as the didactic choices made during implementation. We conclude that Bebras cards and Dash robots provide considerable support for playful engagement with computational concepts and for engaging children with different scientific backgrounds in Computational Thinking. In particular, flexibility in how they are used and easy adaptation of the challenge level make them useful in contexts with broad participation. Additionally, we find that using robots to provide a link between the theoretical presentation of CT on the Bebras card and a physical representation and programming challenge is engaging and helps participants to focus on algorithmic concepts.

  • 37.
    Demmelmaier, Gustav
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Westerberg, Carl
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Data Segmentation Using NLP: Gender and Age (2021). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Natural language processing (NLP) opens the possibility for a computer to read, decipher, and interpret human languages, eventually enabling further understanding of the interaction and communication between human and computer. When appropriate data are available, NLP makes it possible to determine not only the sentiment of a text but also information about the author behind an online post. Previous studies show that NLP can go beyond such surface information into subjective characteristics, enabling author classification from text data.

    This thesis addresses the lack of demographic insight into online user data by studying language use in texts. It compares four popular yet diverse machine learning algorithms for gender and age segmentation. During the project, the age analysis was abandoned due to insufficient data. The online texts were analysed and quantified into 118 parameters based on linguistic differences. Using supervised learning, gender was correctly predicted in 82% of the cases when analysing data from English-speaking online users. It is important to note that the training and test data may be correlated. Language is complex and, in this case, the more complex methods, SVM and neural networks, performed better than the less complex Naive Bayes and logistic regression.
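    To make the pipeline concrete, here is a minimal stand-in with invented toy data (nowhere near 118 parameters): texts are reduced to features and a simple classifier is trained. A tiny multinomial Naive Bayes, one of the compared algorithm families, is the easiest to show from scratch:

```python
# Minimal from-scratch multinomial Naive Bayes on invented toy texts,
# standing in for the thesis's comparison of classifiers.
from collections import Counter
from math import log

train = [("oh lovely so cute", "f"), ("game score win", "m"),
         ("so lovely thanks", "f"), ("match win score", "m")]

counts = {"f": Counter(), "m": Counter()}
priors = Counter()
for text, y in train:
    counts[y].update(text.split())
    priors[y] += 1
vocab = set(w for c in counts.values() for w in c)

def predict(text):
    def score(y):
        total = sum(counts[y].values())
        s = log(priors[y] / len(train))
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out a class.
            s += log((counts[y][w] + 1) / (total + len(vocab)))
        return s
    return max(counts, key=score)

print(predict("so cute win"))  # f
```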

    Download full text (pdf)
    fulltext
  • 38.
    Edqvist, Samuel
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Scheduling Physicians using Constraint Programming (2008). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Download full text (pdf)
    FULLTEXT02
  • 39.
    Elvander, Adam
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Developing a Recommender System for a Mobile E-commerce Application (2015). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis describes the process of conceptualizing and developing a recommender system for a peer-to-peer commerce application. The application in question is called Plick and is a vintage clothes marketplace where private persons and smaller vintage retailers buy and sell secondhand clothes from each other. Recommender systems are a relatively young field of research but have become more popular in recent years with the advent of big data applications such as Netflix and Amazon. Examples of recommender systems being used in e-marketplace applications are however still sparse, and the main contribution of this thesis is insight into this sub-problem in recommender system research. The three main families of recommender algorithms are analyzed and two of them are deemed unfitting for the e-marketplace scenario. Out of the third family, collaborative filtering, three algorithms are described, implemented and tested on a large subset of data collected in Plick that consists mainly of clicks made by users on items in the system. By using both traditional and novel evaluation techniques it is further shown that a user-based collaborative filtering algorithm yields the most accurate recommendations when compared to actual user behavior. This represents a divergence from recommender systems commonly used in e-commerce applications. The paper concludes with a discussion on the cause and significance of this difference and the impact of certain data-preprocessing techniques on the results.
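    The winning algorithm family is easy to sketch: user-based collaborative filtering over click sets with cosine similarity, on invented data (not Plick's). Items clicked by similar users, weighted by similarity, are recommended:

```python
# Sketch of user-based collaborative filtering on click data,
# the algorithm family the thesis found most accurate. Toy data.
from math import sqrt

clicks = {  # user -> set of clicked items
    "u1": {"coat", "boots", "scarf"},
    "u2": {"coat", "boots", "hat"},
    "u3": {"dress", "hat"},
}

def cosine(a, b):
    # Cosine similarity of binary click vectors.
    return len(a & b) / ((sqrt(len(a)) * sqrt(len(b))) or 1)

def recommend(user, k=2):
    seen = clicks[user]
    scores = {}
    for other, items in clicks.items():
        if other == user:
            continue
        sim = cosine(seen, items)
        # Unseen items are scored by the similarity of users who clicked them.
        for item in items - seen:
            scores[item] = scores.get(item, 0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("u1"))  # ['hat', 'dress']
```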

    Download full text (pdf)
    fulltext
  • 40.
    Engvall, Maja
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Data-driven cost management for a cloud-based service across autonomous teams (2017). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Spotify started to use the cloud-based data warehouse BigQuery in 2016, with a pay-as-you-go business model. Since then, usage has increased rapidly in volume and number of users across the organisation, a result of its ease of use compared to previous data warehouse solutions. The technology procurement team lacks an overview of how BigQuery is used across Spotify and a strategy for maintaining an environment where users make cost-informed decisions when designing queries and creating tables. Incidents resulting in unexpectedly high bills are currently handled by a capacity analyst using billing data, which lacks the granularity to map cost to individual BigQuery users.

    The objective of this research is to provide recommendations on how audit data can enable a data-driven, cost-effective environment for BigQuery across the matrix-formed engineering organisation at Spotify. First, an overview of the current usage patterns is presented based on audit data, modelled with regard to volume, complexity and utilization. Different patterns are identified using K-means clustering, including high-volume-consuming squads and underutilized tables. Secondly, recommendations on transparency of audit data for cost-effectiveness are based on insights from the cluster analysis, interviews and characteristics of the organisation structure.

    Recommendations include transparency of data consumption to producers, to prevent paying for unused resources, and transparency of usage patterns to consumers, to avoid unexpected bills. Usage growth is recommended to be presented to the technology procurement squad, enabling better understanding and mitigating challenges in cost forecasting and control.
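    The K-means step can be sketched on invented audit-data features, here (query volume, table utilisation); the thesis's real analysis models volume, complexity and utilization from BigQuery audit logs:

```python
# Illustrative sketch: clustering consumers by (query volume, table
# utilisation) with a tiny from-scratch k-means. Data scales invented.
import random

def kmeans(points, k, iters=20, seed=1):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            groups[i].append(p)
        # Recompute centers as group means; keep old center if group empty.
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

usage = [(1, 0.9), (2, 0.8), (90, 0.2), (95, 0.1), (3, 0.7)]
centers, groups = kmeans(usage, k=2)
print(sorted(len(g) for g in groups))  # [2, 3]: heavy vs light consumers
```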

    Download full text (pdf)
    fulltext
  • 41.
    Eriksson, Benjamin
    et al.
    Chalmers Univ Technol, Gothenburg, Sweden.
    Stjerna, Amanda
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    De Masellis, Riccardo
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Rümmer, Philipp
    Univ Regensburg, Regensburg, Germany.
    Sabelfeld, Andrei
    Chalmers Univ Technol, Gothenburg, Sweden.
    Black Ostrich: Web Application Scanning with String Solvers (2023). In: CCS '23: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security / [ed] Weizhi Meng; Christian D. Jensen; Cas Cremers; Engin Kirda, Association for Computing Machinery (ACM), 2023, p. 549-563. Conference paper (Refereed).
    Abstract [en]

    Securing web applications remains a pressing challenge. Unfortunately, the state of the art in web crawling and security scanning still falls short of deep crawling. A major roadblock is the crawlers' limited ability to pass input validation checks when web applications require data of a certain format, such as email, phone number, or zip code. This paper develops Black Ostrich, a principled approach to deep web crawling and scanning. The key idea is to equip web crawling with string constraint solving capabilities to dynamically infer suitable inputs from regular expression patterns in web applications and thereby pass input validation checks. To enable this use of constraint solvers, we develop new automata-based techniques to process JavaScript regular expressions. We implement our approach extending and combining the Ostrich constraint solver with the Black Widow web crawler. We evaluate Black Ostrich on a set of 8,820 unique validation patterns gathered from over 21,667,978 forms from a combination of the July 2021 Common Crawl and Tranco top 100K. For these forms and reconstructions of input elements corresponding to the patterns, we demonstrate that Black Ostrich achieves a 99% coverage of the form validations compared to an average of 36% for the state-of-the-art scanners. Moreover, out of the 66,377 domains using these patterns, we solve all patterns on 66,309 (99%) while the combined efforts of the other scanners cover 52,632 (79%). We further show that our approach can boost coverage by evaluating it on three open-source applications. Our empirical studies include a study of email validation patterns, where we find that 213 (26%) out of the 825 found email validation patterns liberally admit XSS injection payloads.
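    The key idea can be miniaturised: given a form's validation regex, derive an input that passes it instead of fuzzing blindly. Black Ostrich does this with the automata-based Ostrich string solver; the sketch below fakes a "solver" with bounded enumeration over a tiny alphabet, which only works for trivial patterns:

```python
# Toy stand-in for a string solver: find an input accepted by a
# validation regex by bounded enumeration (NOT how Ostrich works,
# which uses automata; this only scales to tiny patterns).
import re
from itertools import product

def solve(pattern, alphabet="ab1@.", max_len=6):
    rx = re.compile(pattern)
    for n in range(max_len + 1):
        for cand in product(alphabet, repeat=n):
            s = "".join(cand)
            if rx.fullmatch(s):
                return s  # an input that passes the validation check
    return None

zip_like = solve(r"\d{2}")                 # '11'
email_like = solve(r"[^@]+@[^@]+\.[^@]+")  # a minimal email-shaped string
print(zip_like, email_like)
```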

    Download full text (pdf)
    FULLTEXT01
  • 42.
    Eriksson, Lars-Henrik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Verification and generation of geographical data using domain theory (2005). In: TRain Workshop at the 3rd IEEE International Conference on Software Engineering and Formal Methods (SEFM'05), 2005. Conference paper (Other academic).
    Download full text (pdf)
    Extended abstract
  • 43.
    Fernandez-Reyes, Kiko
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Gariano, Isaac Oscar
    Noble, James
    Greenwood-Thessman, Erin
    Homer, Michael
    Wrigstad, Tobias
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Dala: a simple capability-based dynamic language design for data race-freedom (2021). In: Onward! 2021: Proceedings of the 2021 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software / [ed] Wolfgang De Meuter, Elisa Baniassad, 2021, p. 1-17. Conference paper (Refereed).
  • 44.
    Fernández Reyes, Francisco Ramón
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Abstractions to Control the Future (2021). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Multicore and manycore computers are the norm nowadays, and users have expectations that their programs can do multiple things concurrently. To support that, developers use concurrency abstractions such as threads, promises, futures, and/or channels to exchange information. All these abstractions introduce trade-offs between the concurrency model and the language guarantees, and developers accept these trade-offs for the benefits of concurrent programming.

    Many concurrent languages are multi-paradigm, e.g., mix the functional and object-oriented paradigms. This is beneficial to developers because they can choose the most suitable approach when solving a problem. From the point of view of concurrency, purely functional programming languages are data-race free since they only support immutable data. Object-oriented languages do not get a free lunch, and neither do multi-paradigm languages that have imperative features.

    The main problem is uncontrolled concurrent access to shared mutable state, which may inadvertently introduce data races. A data race happens when two concurrent memory operations target the same location, at least one of them is a write, and there is no synchronisation operation involved. Data races make programs exhibit (unwanted) non-deterministic behaviour.

    The contribution of this thesis is two-fold. First, this thesis introduces new concurrent abstractions in a purely functional, statically typed programming language (Paper I – Paper III); these abstractions allow developers to write concurrent control- and delegation-based patterns. Second, this thesis introduces a capability-based dynamic programming model, named Dala, that extends the applicability of the concurrent abstractions to an imperative setting while maintaining data-race freedom (Paper IV). Developers can also use the Dala model to migrate unsafe programs, i.e., programs that may suffer data-races, to data-race free programs.
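    The data-race definition above is directly checkable. A toy detector over an access log, with happens-before edges given explicitly for the sketch (a real detector would derive them from synchronisation operations):

```python
# Two operations race iff: different threads, same location, at least
# one write, and no happens-before (synchronisation) order between them.
from itertools import combinations

# (thread, op, location); hb holds pairs (i, j): event i ordered before j
log = [(1, "write", "x"), (2, "read", "x"), (1, "write", "y"), (2, "write", "y")]
hb = {(0, 1)}  # a sync edge orders the two accesses to x

def races(log, hb):
    out = []
    for i, j in combinations(range(len(log)), 2):
        t1, op1, l1 = log[i]
        t2, op2, l2 = log[j]
        if (t1 != t2 and l1 == l2 and "write" in (op1, op2)
                and (i, j) not in hb and (j, i) not in hb):
            out.append((i, j))
    return out

print(races(log, hb))  # [(2, 3)]: the unordered writes to y
```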

    List of papers
    1. ParT: An asynchronous parallel abstraction for speculative pipeline computations
    Open this publication in new window or tab >>ParT: An asynchronous parallel abstraction for speculative pipeline computations
    2016 (English). In: Coordination Models and Languages / [ed] Lafuente, AL; Proenca, J, Springer, 2016, p. 101-120. Conference paper, Published paper (Refereed).
    Abstract [en]

    The ubiquity of multicore computers has forced programming language designers to rethink how languages express parallelism and concurrency. This has resulted in new language constructs and new combinations or revisions of existing constructs. In this line, we extended the programming languages Encore (actor-based), and Clojure (functional) with an asynchronous parallel abstraction called ParT, a data structure that can dually be seen as a collection of asynchronous values (integrating with futures) or a handle to a parallel computation, plus a collection of combinators for manipulating the data structure. The combinators can express parallel pipelines and speculative parallelism. This paper presents a typed calculus capturing the essence of ParT, abstracting away from details of the Encore and Clojure programming languages. The calculus includes tasks, futures, and combinators similar to those of Orc but implemented in a non-blocking fashion. Furthermore, the calculus strongly mimics how ParT is implemented, and it can serve as the basis for adaptation of ParT into different languages and for further extensions.
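    A rough Python analogue of ParT's flavour can be sketched with ordinary futures. The combinator names (`lift`, `par_map`, `prune`) are invented here, and the worker threads below block where the calculus's combinators are non-blocking:

```python
# A ParT-like collection of asynchronous values: transform it as a
# whole (pipeline) or race its members speculatively (prune).
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

pool = ThreadPoolExecutor()

def lift(*values):
    """Build a 'ParT' from already-available values (a list of futures)."""
    return [pool.submit(lambda v=v: v) for v in values]

def par_map(f, part):
    """Pipeline combinator: apply f to every async value, asynchronously."""
    return [pool.submit(lambda fut=fut: f(fut.result())) for fut in part]

def prune(part):
    """Speculative combinator: first result wins, losers are cancelled."""
    done, not_done = wait(part, return_when=FIRST_COMPLETED)
    for fut in not_done:
        fut.cancel()  # best-effort abort of the speculative losers
    return next(iter(done)).result()

squares = par_map(lambda x: x * x, lift(1, 2, 3))
print(sorted(f.result() for f in squares))  # [1, 4, 9]
```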

    Place, publisher, year, edition, pages
    Springer, 2016
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 9686
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:uu:diva-309764 (URN), 10.1007/978-3-319-39519-7_7 (DOI), 000388794200007, 9783319395180 (ISBN)
    Conference
    COORDINATION 2016
    Projects
    UpScale, UPMARC
    Funder
    EU, FP7, Seventh Framework Programme, FP7-612985
    Available from: 2016-05-24 Created: 2016-12-07 Last updated: 2020-11-18. Bibliographically approved
    2. Forward to a Promising Future
    Open this publication in new window or tab >>Forward to a Promising Future
    2018 (English). In: Conference proceedings COORDINATION 2018, 2018. Conference paper, Published paper (Refereed).
    Abstract [en]

    In many actor-based programming models, asynchronous method calls communicate their results using futures, where the fulfilment occurs under-the-hood. Promises play a similar role to futures, except that they must be explicitly created and explicitly fulfilled; this makes promises more flexible than futures, though promises lack fulfilment guarantees: they can be fulfilled once, multiple times or not at all. Unfortunately, futures are too rigid to exploit many available concurrent and parallel patterns. For instance, many computations block on a future to get its result only to return that result immediately (to fulfil their own future). To make futures more flexible, we explore a construct, forward, that delegates the responsibility for fulfilling the current implicit future to another computation. Forward reduces synchronisation and gives futures promise-like capabilities. This paper presents a formalisation of the forward construct, defined in a high-level source language, and a compilation strategy from the high-level language to a low-level, promised-based target language. The translation is shown to preserve semantics. Based on this foundation, we describe the implementation of forward in the parallel, actor-based language Encore, which compiles to C.
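    The forward construct is easy to approximate with ordinary futures: rather than blocking on an inner future only to copy its result into our own, chain the inner future so it fulfils ours directly. A sketch in Python (names illustrative, not Encore's syntax):

```python
# forward: delegate fulfilment of our future to another computation,
# removing the "block just to re-return the result" pattern.
from concurrent.futures import Future

def forward(inner: Future, outer: Future):
    """When `inner` is fulfilled, its result fulfils `outer` too."""
    inner.add_done_callback(lambda f: outer.set_result(f.result()))

# Without forward, a worker would run outer.set_result(inner.result()),
# blocking a thread until inner completes; with forward, no one blocks.
inner, outer = Future(), Future()
forward(inner, outer)
inner.set_result(42)
print(outer.result())  # 42
```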

    Keywords
    parallel, concurrency, futures, actors, tasks
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:uu:diva-351352 (URN)
    Conference
    COORDINATION - 20th International Conference on Coordination Models and Languages, Madrid, June 18-21, 2018.
    Available from: 2018-05-23 Created: 2018-05-23 Last updated: 2020-11-18. Bibliographically approved
    3. Godot: All the Benefits of Implicit and Explicit Futures
    Open this publication in new window or tab >>Godot: All the Benefits of Implicit and Explicit Futures
    Show others...
    2019 (English). In: 33rd European Conference on Object-Oriented Programming (ECOOP 2019), 2019. Conference paper, Published paper (Refereed).
    Abstract [en]

    Concurrent programs often make use of futures, handles to the results of asynchronous operations. Futures provide means to communicate not yet computed results, and simplify the implementation of operations that synchronise on the result of such asynchronous operations. Futures can be characterised as implicit or explicit, depending on the typing discipline used to type them. Current future implementations suffer from "future proliferation", either at the type-level or at run-time. The former adds future type wrappers, which hinders subtype polymorphism and exposes the client to the internal asynchronous communication architecture. The latter increases latency, by traversing nested future structures at run-time. Many languages suffer both kinds. Previous work offer partial solutions to the future proliferation problems; in this paper we show how these solutions can be integrated in an elegant and coherent way, which is more expressive than either system in isolation. We describe our proposal formally, and state and prove its key properties, in two related calculi, based on the two possible families of future constructs (data-flow futures and control-flow futures). The former relies on static type information to avoid unwanted future creation, and the latter uses an algebraic data type with dynamic checks. We also discuss how to implement our new system efficiently.
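    The "future proliferation" problem, and the data-flow cure, in miniature: an asynchronous call that itself returns a future yields Fut[Fut[T]] under rigid futures; data-flow futures collapse the nesting so clients always see T. A toy Python sketch (not the paper's calculi):

```python
# A future that flattens on fulfilment: if fulfilled with another
# future, it delegates rather than nesting, so clients never need
# the .result().result() chains that run-time proliferation causes.
from concurrent.futures import Future

class FlatFuture(Future):
    def set_result(self, value):
        if isinstance(value, Future):
            # Delegate: fulfil this future when the inner one completes.
            value.add_done_callback(
                lambda f: super(FlatFuture, self).set_result(f.result()))
        else:
            super().set_result(value)

outer, inner = FlatFuture(), Future()
outer.set_result(inner)   # would be Fut[Fut[int]] with rigid futures
inner.set_result(5)
print(outer.result())  # 5, no nested unwrapping needed
```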

    Series
    Leibniz International Proceedings in Informatics (LIPIcs), ISSN 1868-8969 ; 134
    National Category
    Computer Systems
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-396199 (URN), 10.4230/LIPIcs.ECOOP.2019.2 (DOI), 978-3-95977-111-5 (ISBN)
    Conference
    33rd European Conference on Object-Oriented Programming (ECOOP 2019), London, UK, 15-19 july 2019
    Available from: 2019-10-31 Created: 2019-10-31 Last updated: 2020-11-18. Bibliographically approved
    4. Dala: A Simple Capability-Based Dynamic Language Design For Data-Race Freedom
    Open this publication in new window or tab >>Dala: A Simple Capability-Based Dynamic Language Design For Data-Race Freedom
    Show others...
    (English). Manuscript (preprint) (Other academic).
    Abstract [en]

    Dynamic languages like Erlang, Clojure, JavaScript, and E adopted data-race freedom by design. To enforce data-race freedom, these languages either deep copy objects during actor (thread) communication or proxy them back to their owning thread. We present Dala, a simple programming model that ensures data-race freedom while supporting efficient inter-thread communication. Dala is a dynamic, concurrent, capability-based language that relies on three core capabilities: immutable values can be shared freely; isolated mutable objects can be transferred between threads but not aliased; local objects can be aliased within their owning thread but not dereferenced by other threads. The addition of a fourth capability, unsafe, allows data races, and we show how data-race free programs interact with unsafe programs. We present a formal model of Dala and prove data-race freedom and the dynamic gradual guarantee. These theorems guarantee data-race freedom when using safe capabilities and show that the addition of capabilities is semantics preserving modulo permission and cast errors.
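    The three safe capabilities can be read as dynamic checks at transfer points. A deliberately naive sketch (the real model also tracks aliasing, dereference rights, and the unsafe capability):

```python
# Dala's safe capabilities as checks on inter-thread transfer:
# imm shares freely, iso moves ownership, local may not cross threads.
class Obj:
    def __init__(self, cap, **fields):
        assert cap in ("imm", "iso", "local")
        self.cap, self.fields = cap, dict(fields)

def send(obj, sender, receiver):
    """Transfer obj from sender thread to receiver thread."""
    if obj.cap == "imm":
        return obj                # immutable: safe to share freely
    if obj.cap == "iso":
        obj.owner = receiver      # isolated: ownership moves, no alias stays
        return obj
    raise TypeError("local objects may not cross threads")

msg = Obj("imm", kind="greeting")
assert send(msg, "t1", "t2") is msg
try:
    send(Obj("local", x=1), "t1", "t2")
except TypeError as e:
    print(e)  # local objects may not cross threads
```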

    National Category
    Computer Systems
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-425127 (URN)
    Available from: 2020-11-12 Created: 2020-11-12 Last updated: 2021-01-29. Bibliographically approved
    Download full text (pdf)
    fulltext
    Download (jpg)
    presentationsbild
  • 45.
    Fomkin, Ruslan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Optimization and Execution of Complex Scientific Queries (2009). Doctoral thesis, monograph (Other academic).
    Abstract [en]

    Large volumes of data produced and shared within scientific communities are analyzed by many researchers to investigate different scientific theories. Currently the analyses are implemented in traditional programming languages such as C++. This is inefficient for research productivity, since it is difficult to write, understand, and modify such programs. Furthermore, programs should scale over large data volumes and analysis complexity, which further complicates code development.

    This Thesis investigates the use of database technologies to implement scientific applications, in which data are complex objects describing measurements of independent events and the analyses are selections of events made by applying conjunctions of complex numerical filters on each object separately. An example of such an application is the analysis for the presence of Higgs bosons in collision events produced by the ATLAS experiment. For the efficient implementation of such an ATLAS application, a new data stream management system, SQISLE, is developed. In SQISLE, queries are specified over complex objects which are efficiently streamed from sources through the query engine. This streaming approach is compared with the conventional approach of loading events into a database before querying. Since the queries implementing scientific analyses are large and complex, novel techniques are developed for efficient query processing. To obtain efficient plans for such queries, SQISLE implements runtime query optimization strategies which, during query execution, collect runtime statistics for a query, reoptimize the query using the collected statistics, and dynamically switch optimization strategies. The cost-based optimization utilizes a novel cost model for aggregate functions over nested subqueries. To alleviate estimation errors, large queries are decomposed into conjunctions of subqueries over which runtime statistics are measured. Performance is further improved by query transformation, view materialization, and partial evaluation. ATLAS queries in SQISLE using these query processing techniques perform close to or better than hard-coded C++ implementations of the same analyses.

    Scientific data are often stored in Grids, which manage both storage and computational resources. This Thesis includes a framework POQSEC that utilizes Grid resources to scale scientific queries over large data volumes by parallelizing the queries and shipping the data management system itself, e.g. SQISLE, to Grid computational nodes for the parallel query execution.
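    One of SQISLE's runtime-optimization ideas, collecting statistics during execution and then reoptimizing the plan, can be shown on a toy conjunctive filter query. Data and filters are invented, and the real system does far more (cost models for aggregates over subqueries, strategy switching):

```python
# Sketch: measure each filter's pass rate on a sample of the stream,
# then reorder the conjunction so the most selective filter runs first.
def run(filters, events, sample=100):
    stats = {f: [0, 0] for f in filters}   # f -> [passed, tested]
    # Phase 1: collect runtime statistics on a sample of events.
    for e in events[:sample]:
        for f in filters:
            stats[f][1] += 1
            if f(e):
                stats[f][0] += 1
    # Phase 2: reoptimise -- lowest pass rate (most selective) first.
    ordered = sorted(filters, key=lambda f: stats[f][0] / stats[f][1])
    # Phase 3: run the reoptimised plan on the rest of the stream.
    return [e for e in events[sample:] if all(f(e) for f in ordered)], ordered

events = list(range(1000))
f_even = lambda e: e % 2 == 0
f_big = lambda e: e > 900
selected, ordered = run([f_even, f_big], events)
print(len(selected), ordered[0] is f_big)  # 49 True
```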

    Download full text (pdf)
    FULLTEXT01
  • 46.
    Forghani, Kamran
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Carlsson, Mats
    RISE.
    Flener, Pierre
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Fredriksson, Magnus
    Luleå University of Technology, Wood Technology.
    Pearson, Justin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Yuan, Di
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Maximizing Value Yield in Wood Industry through Flexible Sawing and Product Grading Based on Wane and Log Shape (2024). In: Computers and Electronics in Agriculture, ISSN 0168-1699, E-ISSN 1872-7107, Vol. 216, article id 108513. Article in journal (Refereed)
    Abstract [en]

    The optimization of sawing processes in the wood industry is critical for maximizing efficiency and profitability. The introduction of computerized tomography scanners provides sawmill operators with three-dimensional internal models of logs, which can be used to assess value and yield more accurately. We present a methodology for solving the sawing optimization problem employing a flexible sawing scheme that allows greater flexibility in cutting logs into products while considering product quality classes influenced by wane defects. The methodology has two phases: preprocessing and optimization. In the preprocessing phase, two alternative algorithms are given that generate and evaluate the potential sawing positions of products by considering the 3D surface of the log, product size requirements, and product quality classes. In the optimization phase, a maximum set-packing problem is solved for the preprocessed data using mixed-integer programming (MIP), aiming to obtain a feasible cut pattern that maximizes value yield. This is implemented in a system named FlexSaw, which takes advantage of parallel computation during the preprocessing phase and utilizes a MIP solver during the optimization phase. The proposed sawing methods are evaluated on the Swedish Pine Stem Bank. Additionally, FlexSaw is compared with an existing tool that utilizes cant sawing. Results demonstrate the superiority of flexible sawing. While the practical feasibility of implementing a flexible way of sawing logs is constrained by the limitations of current sawmill machinery, the potential increase in yield promotes the exploration of alternative machinery in the wood industry.
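    The optimization phase described above can be sketched in miniature. The snippet below solves a toy maximum-weight set-packing instance by exhaustive search rather than a MIP solver; all product names, values, and "log cells" are invented stand-ins for the preprocessed sawing positions:

    ```python
    from itertools import combinations

    def best_packing(candidates):
        """Exhaustively pick the non-overlapping subset of candidate
        products with maximal total value (fine for tiny instances)."""
        best, best_value = (), 0.0
        for r in range(1, len(candidates) + 1):
            for subset in combinations(candidates, r):
                cells = [c for _, _, cs in subset for c in cs]
                if len(cells) == len(set(cells)):        # no shared log cells
                    value = sum(v for _, v, _ in subset)
                    if value > best_value:
                        best, best_value = subset, value
        return best, best_value

    # (name, value, occupied log cross-section cells)
    candidates = [("plank_A", 10.0, {1, 2, 3}),
                  ("plank_B", 7.0, {3, 4}),
                  ("board_C", 6.0, {4, 5})]

    chosen, value = best_packing(candidates)
    assert value == 16.0    # plank_A + board_C do not overlap
    ```

    A real instance has far too many candidate positions for enumeration, which is why FlexSaw formulates the same packing condition as a MIP.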

  • 47.
    Francisco Rodríguez, María Andreína
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computing Science.
    Analysis, synthesis and application of automaton-based constraint descriptions (2017). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Constraint programming (CP) is a technology in which a combinatorial problem is modelled as a conjunction of constraints on variables ranging over given initial domains, and optionally an objective function on the variables. Such a model is given to a general-purpose solver performing systematic search to find constraint-satisfying domain values for the variables, giving an optimal value to the objective function. A constraint predicate (also known as a global constraint) does two things: from the modelling perspective, it allows a modeller to express a commonly occurring combinatorial substructure, for example that a set of variables must take distinct values; from the solving perspective, it comes with a propagation algorithm, called a propagator, which removes some but not necessarily all impossible values from the current domains of its variables when invoked during search.
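    As a minimal concrete instance of a propagator, the sketch below prunes domains for an "all different" predicate at the weakest useful consistency level (removing values fixed by singleton domains); real solvers use much stronger propagation algorithms:

    ```python
    def propagate_all_different(domains):
        """domains: dict var -> set of candidate values; pruned in place
        to a fixpoint. Returns False if some domain becomes empty."""
        changed = True
        while changed:
            changed = False
            for var, dom in domains.items():
                if len(dom) == 1:
                    (val,) = dom
                    for other, odom in domains.items():
                        if other != var and val in odom:
                            odom.discard(val)
                            changed = True
                            if not odom:
                                return False    # inconsistency detected
        return True

    domains = {"x": {1}, "y": {1, 2}, "z": {2, 3}}
    assert propagate_all_different(domains)
    assert domains == {"x": {1}, "y": {2}, "z": {3}}
    ```

    Note that the pruning removes some but not necessarily all impossible values, exactly as the definition above allows.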

    Although modern CP solvers have many constraint predicates, often a predicate one would like to use is not available. In the past, the choices were either to reformulate the model or to write one's own propagator. In this dissertation, we contribute to the automatic design of propagators for new predicates.

    Integer time series are often subject to constraints on the aggregation of the features of all maximal occurrences of some pattern. For example, the minimum width of the peaks may be constrained. Automata allow many constraint predicates for variable sequences, and in particular many time-series predicates, to be described in a high-level way. Our first contribution is an algorithm for generating an automaton-based predicate description from a pattern, a feature, and an aggregator.

    It has previously been shown how to decompose an automaton-described constraint on a variable sequence into a conjunction of constraints whose predicates have existing propagators. This conjunction provides the propagation, but it is unknown how to propagate it efficiently. Our second contribution is a tool for deriving, in an off-line process, implied constraints for automaton-induced constraint decompositions to improve propagation. Further, when a constraint predicate functionally determines a result variable that is unchanged under reversal of a variable sequence, we provide as our third contribution an algorithm for deriving an implied constraint between the result variables for a variable sequence, a prefix thereof, and the corresponding suffix.
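    As a toy illustration of an automaton with an accumulator over a variable sequence, the sketch below counts peaks in an integer time series. It is a deliberate simplification of the catalogue's peak pattern (plateaus are handled crudely) and is not the thesis's actual construction:

    ```python
    def signature(xs):
        """Map consecutive pairs to '<', '=', '>' (the automaton's input alphabet)."""
        return ["<" if a < b else ">" if a > b else "=" for a, b in zip(xs, xs[1:])]

    def count_peaks(xs):
        """Accumulator automaton: state 's' (start/descending) or 'r' (rising);
        the accumulator 'peaks' is incremented on the first '>' after a rise."""
        state, peaks = "s", 0
        for sym in signature(xs):
            if state == "s" and sym == "<":
                state = "r"
            elif state == "r" and sym == ">":
                peaks += 1
                state = "s"
        return peaks

    assert count_peaks([1, 3, 2, 5, 5, 6, 4]) == 2
    ```

    The decomposition approach mentioned above would instead unroll such an automaton into one transition constraint per sequence position, with the accumulator threaded through as extra variables.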

    List of papers
    1. Automatic generation of descriptions of time-series constraints
    2017 (English). In: IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI), IEEE Computer Society, 2017, p. 102-109. Conference paper, Published paper (Refereed)
    Abstract [en]

    Integer time series are often subject to constraints on the aggregation of the features of all occurrences of some pattern within the series. For example, the number of inflexions may be constrained, or the sum of the peak maxima, or the minimum of the valley widths. Many time-series constraints can be described by transducers. The output alphabet of such a transducer consists of symbols that denote the phases of identifying the maximal occurrences of a pattern. It was recently shown how to synthesise automatically a constraint propagator and a constraint checker from such a transducer, which however has to be designed manually from a pattern. Here we define a large class of patterns, present an algorithm for automatically generating a low-level transducer from such a high-level pattern, and prove it correct. This class covers all 20 patterns of the Time-Series Constraint Catalogue, which can now be automatically extended at will.

    Place, publisher, year, edition, pages
    IEEE Computer Society, 2017
    Series
    Proceedings-International Conference on Tools With Artificial Intelligence, ISSN 1082-3409
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-332145 (URN), 10.1109/ICTAI.2017.00027 (DOI), 000435294700016 (), 978-1-5386-3876-7 (ISBN)
    Conference
    IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI), Boston, MA, November 6–8, 2017
    Funder
    Swedish Research Council, 2012-4908; Swedish Research Council, 2015-0491
    Available from: 2018-06-07. Created: 2017-10-24. Last updated: 2019-08-28. Bibliographically approved
    2. Generation of implied constraints for automaton-induced decompositions
    2013 (English). In: Proc. 25th International Conference on Tools with Artificial Intelligence, IEEE Computer Society, 2013, p. 1076-1083. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    IEEE Computer Society, 2013
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-218040 (URN), 10.1109/ICTAI.2013.160 (DOI), 978-1-4799-2971-9 (ISBN)
    Conference
    ICTAI 2013, November 4-6, Herndon, VA
    Available from: 2014-02-06. Created: 2014-02-06. Last updated: 2018-01-11. Bibliographically approved
    3. Implied constraints for AUTOMATON constraints
    2015 (English). In: Global Conference on Artificial Intelligence: GCAI 2015, Manchester, UK: Cool Press, 2015, p. 113-126. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Manchester, UK: Cool Press, 2015
    Series
    EasyChair Proceedings in Computing, ISSN 2040-557X ; 36
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-268653 (URN)
    Conference
    GCAI 2015, October 16–18, Tbilisi, Georgia
    Funder
    Swedish Research Council
    Available from: 2015-12-18. Created: 2015-12-09. Last updated: 2018-01-10. Bibliographically approved
    4. Time-series constraints: Improvements and application in CP and MIP contexts
    2016 (English). In: Integration of AI and OR Techniques in Constraint Programming, Springer, 2016, p. 18-34. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Springer, 2016
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 9676
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-298447 (URN), 10.1007/978-3-319-33954-2_2 (DOI), 000378993700002 (), 978-3-319-33953-5 (ISBN)
    Conference
    CPAIOR 2016, May 29 – June 1, Banff, Canada
    Available from: 2016-05-12. Created: 2016-07-04. Last updated: 2018-01-10. Bibliographically approved
    5. Linking prefixes and suffixes for constraints encoded using automata with accumulators
    2014 (English). In: Principles and Practice of Constraint Programming: CP 2014, Springer, 2014, p. 142-157. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Springer, 2014
    Series
    Lecture Notes in Computer Science ; 8656
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-238534 (URN), 10.1007/978-3-319-10428-7_13 (DOI), 978-3-319-10427-0 (ISBN)
    Conference
    CP 2014, September 8–12, Lyon, France
    Funder
    Swedish Research Council
    Available from: 2014-09-12. Created: 2014-12-13. Last updated: 2018-01-11. Bibliographically approved
    6. Systematic derivation of bounds and glue constraints for time-series constraints
    2016 (English). In: Principles and Practice of Constraint Programming: CP 2016, Springer, 2016, p. 13-29. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    Springer, 2016
    Series
    Lecture Notes in Computer Science ; 9892
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:uu:diva-302048 (URN), 10.1007/978-3-319-44953-1_2 (DOI), 000389019700003 (), 978-3-319-44952-4 (ISBN)
    Conference
    CP 2016, September 5–9, Toulouse, France
    Funder
    Swedish Research Council, 2011-6133; Swedish Research Council, 2012-4908
    Available from: 2016-08-23. Created: 2016-08-29. Last updated: 2018-01-10. Bibliographically approved
  • 48.
    Fransson, Moa
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Fåhraeus, Lisa
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Finding Patterns in Vehicle Diagnostic Trouble Codes: A data mining study applying associative classification (2015). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In Scania vehicles, Diagnostic Trouble Codes (DTCs) are collected while driving and later loaded into a central database when the vehicle visits a workshop. These DTCs are used to analyse vehicle health statistically, which is why correct data is desirable. DTCs can, however, also be triggered in workshops by repair work and tests, yet they are loaded into the database without any notification. For an accurate analysis of vehicle health it would be desirable to find and remove such DTCs. This thesis has examined whether that is possible by searching for patterns in DTCs that indicate whether a DTC was generated in a workshop or not. An Associative Classification method was used because its outcome is easy to interpret. The classifier was built with well-known algorithms, and two classification algorithms were then developed to fit the data structure when labelling new data. The final classifier achieved an accuracy above 80 percent, with no distinctive difference between the two algorithms. However, barely 50 percent of all workshop DTCs were found. The conclusion is that either patterns occur in only 50 percent of the workshop DTCs, or the classifier can only detect 50 percent of them. The patterns found both confirmed previous knowledge about workshop-generated DTCs and provided Scania with new information.
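    The associative classification scheme described above can be sketched as a set of class association rules, each mapping co-occurring DTCs to a label with a confidence; a new record takes the label of the highest-confidence matching rule. Rules and DTC codes below are invented for illustration only:

    ```python
    # (antecedent DTC set, class label, confidence mined from training data)
    rules = [({"P0101", "P0202"}, "workshop", 0.92),
             ({"P0303"},          "driving",  0.81),
             ({"P0101"},          "driving",  0.60)]

    def classify(dtc_codes, rules, default="driving"):
        """Label a set of DTCs by the highest-confidence rule it satisfies."""
        matching = [r for r in rules if r[0] <= dtc_codes]   # antecedent subset test
        if not matching:
            return default
        return max(matching, key=lambda r: r[2])[1]          # best confidence wins

    assert classify({"P0101", "P0202", "P0404"}, rules) == "workshop"
    assert classify({"P0101"}, rules) == "driving"
    ```

    The thesis's two labelling algorithms presumably differ in how such rule conflicts and non-matches are resolved; this sketch shows only the simplest policy.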

  • 49.
    Fredriksson, Susanne
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Auktorisation och ackreditering inom Försvarsmakten: En studie i nyttan av en standardiserad process för att hantera informationssäkerhet [Authorisation and accreditation in the Swedish Armed Forces: a study of the utility of a standardised process for managing information security] (2011). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Information technology is an essential part of society today, not least in large organisations dealing with sensitive information. An example of such an organisation is the Swedish Armed Forces, which needs ways to ensure information security in its IT systems. The means used is an authorisation and accreditation process.

    All IT systems go through a life cycle that includes realisation, usage, development and liquidation. In the Swedish Armed Forces this life cycle is an authorisation process. Each step in the process is followed by an authorisation decision, and one of these steps is accreditation: a formal approval to put the system into operation.

    The aim of the thesis is to study how large organisations may ensure information security when developing IT, and the importance of a standardised process for handling information security. The study has been carried out by comparing the process of the Swedish Armed Forces with other methods for running projects and with theories concerning IT development.

    Interviews with information security consultants at Combitech AB, along with a study of documentation, have been carried out in order to examine the process. The theoretical framework has been formed through a literature study of project models and methods for IT development.

    The main result of the thesis is that a standardised process which manages quality, communication, traceability, complexity, aims, operation and liquidation plans, risk assessment, and use cases increases the chance of a successful project and of achieving information security when developing new IT. High quality in the formulation of the security aims and the specification of requirements, along with tests establishing that all requirements are fulfilled, is crucial for information security.

  • 50.
    Frisberg, Olle
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Placement support for signal intelligence units (2019). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The goal of this thesis was to develop an optimization model that automatically finds optimal groupings for signal intelligence units, aiming to maximize surveillance capability in a user-defined target area. Consideration was given to transportation possibilities, type of terrain, and the requirement of radio communication between the direction finders. Three scenarios were tested, each posing its own topographical challenges. Several derivative-free optimization methods were implemented and evaluated: global methods that find approximate groupings using a geometrical model developed in the thesis, and the local method pattern search, which used the already existing model. Particle swarm optimization and a genetic algorithm turned out to be the best global solvers. A grouping found by a global method was then improved by pattern search, which evaluates possible groupings nearby. The greatest practical challenge for particle swarm and pattern search was finding feasible placement points given a desired direction and step length; workarounds were developed that allow more dynamic search patterns. For future use, the placement support should be tested on more scenarios with different prerequisites, and the approved terrain types must be adjusted to the kind of vehicle carrying the direction finder.
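    The local pattern-search phase mentioned above can be sketched as compass search: poll a few directions at the current step length, move on improvement, otherwise shrink the step. The objective and feasibility test below are invented stand-ins for the thesis's surveillance model and terrain constraints:

    ```python
    def pattern_search(f, x, feasible, step=1.0, tol=1e-3):
        """Poll the four axis directions; move to the best feasible improvement,
        otherwise halve the step, until the step falls below tol."""
        while step > tol:
            candidates = [(x[0] + dx * step, x[1] + dy * step)
                          for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            improved = [c for c in candidates if feasible(c) and f(c) < f(x)]
            if improved:
                x = min(improved, key=f)
            else:
                step /= 2            # refine the search pattern
        return x

    # Toy objective: distance to (3, 4), restricted to the half-plane x <= 3.5.
    f = lambda p: (p[0] - 3) ** 2 + (p[1] - 4) ** 2
    best = pattern_search(f, (0.0, 0.0), feasible=lambda p: p[0] <= 3.5)
    assert f(best) < 1e-3
    ```

    The feasibility predicate is where the thesis's practical difficulty lives: a poll direction may point at terrain where no unit can be placed, which motivated the more dynamic search patterns mentioned above.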
