Uppsala University Publications (uu.se)
Search results 1-50 of 57
  • 1.
    Adwent, Ann-Kristin
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Shared IT Service Center i kommuner (2012). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    To be able to maintain an adequate IT service for various users and needs, more municipalities are looking at the possibility of merging local IT departments. One solution for merging multiple services/functions and creating synergy opportunities is called a Shared Service Center (SSC). The concept of an SSC is that the administrative service itself becomes a core activity within the organization. The aim of this thesis is to provide municipalities that are considering a merger of their local IT departments with recommendations on how to design the Shared IT Service Center. The recommendations are outlined based on an analysis of IT-management literature and reports, and on a study of three ongoing collaborations.

    The conclusions drawn from the study suggest the following guidelines for the design of a Shared IT Service Center for municipalities:

    • Create a vision associated with a specific and structured target state.
    • Identify the needs of different target groups in the municipalities and set a common standard.
    • Create a clear and practical model/SLA for the form of the cooperation and agreement.
    • Ensure that the individual municipalities retain their commissioning skills, so that these are not lost in the context of a common IT operation.
    • Find an organization that is democratic for the member municipalities and facilitates a long-term partnership.
    • Specify the operation and maintenance so that it can be regulated and controlled.
    • Establish a common help desk.
    • Establish a common standard and a consolidated infrastructure before introducing a common technology platform.

  • 2.
    Andrejev, Andrej
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Semantic Web Queries over Scientific Data (2016). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Semantic Web and Linked Open Data provide a potential platform for interoperability of scientific data, offering a flexible model for providing machine-readable and queryable metadata. However, RDF and SPARQL gained limited adoption within the scientific community, mainly due to the lack of support for managing massive numeric data, along with certain other important features – such as extensibility with user-defined functions, query modularity, and integration with existing environments and workflows.

    We present the design, implementation and evaluation of Scientific SPARQL – a language for querying data and metadata combined, represented using the RDF graph model extended with numeric multidimensional arrays as node values – RDF with Arrays. The techniques used to store RDF with Arrays in a scalable way and process Scientific SPARQL queries and updates are implemented in our prototype software – Scientific SPARQL Database Manager, SSDM, and its integrations with data storage systems and computational frameworks. This includes scalable storage solutions for numeric multidimensional arrays and an efficient implementation of array operations. The arrays can be physically stored in a variety of external storage systems, including files, relational databases, and specialized array data stores, using our Array Storage Extensibility Interface. Whenever possible SSDM accumulates array operations and accesses array contents in a lazy fashion.

    In scientific applications numeric computations are often used for filtering or post-processing the retrieved data, which can be expressed in a functional way. Scientific SPARQL allows expressing common query sub-tasks with functions defined as parameterized queries. This becomes especially useful along with functional language abstractions such as lexical closures and second-order functions, e.g. array mappers.

    Existing computational libraries can be interfaced and invoked from Scientific SPARQL queries as foreign functions. Cost estimates and alternative evaluation directions may be specified, aiding the construction of better execution plans. Costly array processing, e.g. filtering and aggregation, is thus performed on the server, reducing the amount of communication. Furthermore, common supported operations are delegated to the array storage back-ends, according to their capabilities. Both expressivity and performance of Scientific SPARQL are evaluated on a real-world example, and further performance tests are run using our mini-benchmark for array queries.
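    To make the lazy array handling concrete, the following minimal Python sketch shows the general idea of accumulating array operations and touching stored data only when an element is actually requested. The class and names are hypothetical illustrations of the technique, not SSDM's actual API or SciSPARQL syntax.

    # Illustrative sketch (not SSDM): a proxy that records element-wise
    # operations and reads the underlying array only when indexed.
    class LazyArray:
        def __init__(self, loader):
            self.loader = loader          # function that fetches the array from storage
            self.pending = []             # accumulated element-wise operations

        def map(self, fn):
            self.pending.append(fn)       # defer the operation, no data access yet
            return self

        def __getitem__(self, index):
            value = self.loader()[index]  # storage is read only here
            for fn in self.pending:
                value = fn(value)
            return value

    # Two mapped operations are accumulated; storage is read once, on indexing.
    arr = LazyArray(lambda: list(range(10)))
    print(arr.map(lambda x: x * 2).map(lambda x: x + 1)[3])   # -> 7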

  • 3.
    Astrid, Berghult
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    A practical comparison between algebraic and statistical attacks on the lightweight cipher SIMON (2016). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    In the summer of 2013 the NSA released a new family of lightweight block ciphers called SIMON. However, they did not publish any assessment of the security of SIMON. Since then only a few papers on this topic have been released, and none of them has included an algebraic analysis. Moreover, only one paper described a practical implementation of the attack. This master thesis aims to implement a practical attack, both algebraic and differential, on SIMON. In doing so we are able to make a comparison between the two attack methods. The algebraic attack was executed with the SAT solver CryptoMiniSat2 and could break 7 rounds. The differential attack was implemented in three steps. First we created a difference distribution table (DDT), then we identified a differential by a search algorithm over the DDT, and in the last step we designed a key recovery attack to recover the last round key. The attack could break 13 rounds using a 9-round differential. With a simple expansion of the key recovery attack it has the potential to break even more rounds for the same 9-round differential. This indicates that algebraic cryptanalysis might not be such a strong tool, since it could only break 7 rounds. Furthermore, if a generic algebraic attack does not work on SIMON, it has little or no chance of being successful on a more complex cipher. In other words, this algebraic attack may serve as a benchmark for the efficiency of generic algebraic attacks.
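    As an illustration of the first differential step mentioned above, the Python sketch below builds a difference distribution table for a toy 4-bit S-box. Note that SIMON itself has no S-box (its round function uses AND and rotations), so this is only the generic textbook construction, not the thesis code.

    # DDT[a][b] counts inputs x with S(x) ^ S(x ^ a) == b.
    SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]   # toy 4-bit S-box (DES S1, row 0)

    def difference_distribution_table(sbox):
        n = len(sbox)
        ddt = [[0] * n for _ in range(n)]
        for in_diff in range(n):
            for x in range(n):
                out_diff = sbox[x] ^ sbox[x ^ in_diff]
                ddt[in_diff][out_diff] += 1
        return ddt

    ddt = difference_distribution_table(SBOX)
    # Entries with high counts suggest useful differentials (input diff, output diff, count).
    print([(a, b, c) for a, row in enumerate(ddt) for b, c in enumerate(row) if a and c > 4])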

  • 4.
    Backlund, Ludvig
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    A technical overview of distributed ledger technologies in the Nordic capital market (2016). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    This thesis examines how Distributed Ledger Technologies (DLTs) could be utilized in capital markets in general and in the Nordic capital market in particular. DLTs were introduced with the so-called cryptocurrency Bitcoin in 2009 and have in the last few years been of interest to various financial institutions as a means to streamline financial processes. By combining computer-scientific concepts such as public-key cryptography and consensus algorithms, DLTs make it possible to keep shared databases with limited trust among the participants and without the use of a trusted third party. In this thesis various actors on the Nordic capital market were interviewed and their stances on DLTs summarized. In addition, a Proof of Concept of a permissioned DLT application for ownership registration of securities was constructed. It was found that all the interviewees were generally optimistic about the potential of DLTs to increase the efficiency of capital markets. The technology needs to be adapted to handle the capital market's demands for privacy and large transaction volumes, but there is general agreement among the interviewees that these issues will be solved. The biggest challenge for an adoption of DLTs seems to lie in finding a common industry-wide standard.
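    For readers unfamiliar with the mechanics, the hypothetical Python sketch below shows the hash-chaining idea underlying a ledger of ownership records: each block commits to its predecessor, so past records cannot be altered without breaking every later hash. It is not the Proof of Concept built in the thesis and it omits consensus and signatures entirely.

    import hashlib, json, time

    def make_block(records, prev_hash):
        block = {"time": time.time(), "records": records, "prev": prev_hash}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    genesis = make_block(["genesis"], "0" * 64)
    block1 = make_block([{"security": "ABC", "owner": "Alice", "qty": 100}], genesis["hash"])
    block2 = make_block([{"security": "ABC", "owner": "Bob", "qty": 100}], block1["hash"])

    def verify(chain):
        # Recompute each hash and check that every block points at its predecessor.
        for prev, cur in zip(chain, chain[1:]):
            body = {k: cur[k] for k in ("time", "records", "prev")}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if cur["prev"] != prev["hash"] or cur["hash"] != recomputed:
                return False
        return True

    print(verify([genesis, block1, block2]))   # True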

  • 5.
    Badiozamany, Sobhan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Real-time data stream clustering over sliding windows (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In many applications, e.g. urban traffic monitoring, stock trading, and industrial sensor data monitoring, clustering algorithms are applied on data streams in real-time to find current patterns. Here, sliding windows are commonly used as they capture concept drift.

    Real-time clustering over sliding windows means detecting continuously evolving clusters as soon as they occur in the stream, which requires efficient maintenance of cluster memberships that change as windows slide.

    Data stream management systems (DSMSs) provide high-level query languages for searching and analyzing streaming data. In this thesis we extend a DSMS with a real-time data stream clustering framework called the Generic 2-phase Continuous Summarization framework (G2CS). G2CS modularizes data stream clustering by taking as input clustering algorithms which are expressed in terms of a number of functions and indexing structures. G2CS supports real-time clustering through an efficient window-sliding mechanism and algorithm-transparent indexing. A particular challenge for real-time detection of a high number of rapidly evolving clusters is the efficiency of window slides for clustering algorithms where deletion of expired data is not supported, e.g. BIRCH. To that end, G2CS includes a novel window maintenance mechanism called Sliding Binary Merge (SBM). To further improve real-time sliding performance, G2CS uses generation-based multi-dimensional indexing where indexing structures suitable for the clustering algorithms can be plugged in.
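    The two-phase summarization idea can be illustrated with a small, generic Python sketch: per-pane summaries are maintained as data arrives (phase 1) and merged on demand when the window result is needed (phase 2), and expiry only drops whole panes. This is a structural illustration with a trivial mean summary, not G2CS or its Sliding Binary Merge mechanism.

    from collections import deque

    class PaneSummary:
        def __init__(self):
            self.n, self.total = 0, 0.0
        def add(self, x):
            self.n += 1
            self.total += x
        def merge(self, other):
            out = PaneSummary()
            out.n, out.total = self.n + other.n, self.total + other.total
            return out

    class SlidingWindow:
        def __init__(self, panes_per_window):
            self.panes = deque(maxlen=panes_per_window)   # old panes expire automatically
        def new_pane(self):
            self.panes.append(PaneSummary())
        def add(self, x):
            self.panes[-1].add(x)
        def summary(self):
            merged = PaneSummary()
            for p in self.panes:
                merged = merged.merge(p)
            return merged.total / merged.n if merged.n else None

    w = SlidingWindow(panes_per_window=3)
    for pane in range(5):                  # 5 panes arrive; the window keeps the last 3
        w.new_pane()
        for x in range(10):
            w.add(pane + x / 10)
    print(w.summary())                     # mean over the 3 most recent panes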

    List of papers
    1. Scalable ordered indexing of streaming data
    2012 (English). In: 3rd International Workshop on Accelerating Data Management Systems using Modern Processor and Storage Architectures, 2012, p. 11-. Conference paper (Refereed)
    National subject category: Computer Sciences
    Identifiers: urn:nbn:se:uu:diva-185068 (URN)
    Conference: ADMS 2012, Istanbul, Turkey
    Project: eSSENCE
    Available from: 2012-08-27. Created: 2012-11-19. Last updated: 2016-09-09. Bibliographically approved.
    2. Grand challenge: Implementation by frequently emitting parallel windows and user-defined aggregate functions
    2013 (English). In: Proc. 7th ACM International Conference on Distributed Event-Based Systems, New York: ACM Press, 2013, pp. 325-330. Conference paper (Refereed)
    National subject category: Computer Sciences
    Identifiers: urn:nbn:se:uu:diva-211954 (URN), 10.1145/2488222.2488284 (DOI), 978-1-4503-1758-0 (ISBN)
    Conference: DEBS 2013
    Available from: 2013-06-29. Created: 2013-12-03. Last updated: 2016-09-09. Bibliographically approved.
    3. Distributed multi-query optimization of continuous clustering queries
    2014 (English). In: Proc. VLDB 2014 PhD Workshop, 2014. Conference paper (Refereed)
    Abstract [en]

    This work addresses the problem of sharing execution plans for queries that continuously cluster streaming data to provide an evolving summary of the data stream. This is challenging since clustering is an expensive task, there might be many clustering queries running simultaneously, each continuous query has a long lifetime, and the execution plans often overlap. Clustering is similar to conventional grouped aggregation but cluster formation is more expensive than group formation, which makes incremental maintenance more challenging. The goal of this work is to minimize response time of continuous clustering queries with limited resources through multi-query optimization. To that end, strategies for sharing execution plans between continuous clustering queries are investigated and the architecture of a system is outlined that optimizes the processing of multiple such queries. Since there are many clustering algorithms, the system should be extensible to easily incorporate user-defined clustering algorithms.

    National subject category: Computer Sciences
    Research subject: Computer Science with specialization in Database Technology
    Identifiers: urn:nbn:se:uu:diva-302790 (URN)
    Conference: VLDB 2014
    Available from: 2016-09-09. Created: 2016-09-09. Last updated: 2016-09-09. Bibliographically approved.
    4. Framework for real-time clustering over sliding windows
    2016 (English). In: Proc. 28th International Conference on Scientific and Statistical Database Management, New York: ACM Press, 2016, pp. 1-13, article 19. Conference paper (Refereed)
    National subject category: Computer Sciences
    Identifiers: urn:nbn:se:uu:diva-302792 (URN), 10.1145/2949689.2949696 (DOI), 978-1-4503-4215-5 (ISBN)
    Conference: SSDBM 2016
    Available from: 2016-07-18. Created: 2016-09-09. Last updated: 2016-09-12. Bibliographically approved.
  • 6.
    Bernström, Kristian
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Näsman, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Utredning och implementering av en prototyp för integration av Prevas FOCS och ABB 800xA [Investigation and implementation of a prototype for the integration of Prevas FOCS and ABB 800xA] (2014). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    ABB and Prevas have initiated a collaboration to sell a system to optimize steel industry furnaces, called FOCS. The purpose of this thesis is to investigate possibilities for integrating Prevas FOCS and ABB 800xA.

    The result of the investigation is used for the implementation of a prototype of the integrated system. The study shows a general method that can be used when integrating two software systems. The prototype of the integrated system is made with usability principles in mind. This is a very important aspect in order to create a good working environment for the operators of a steel plant. It is also important to follow communication standards when integrating software systems. In industrial networks used in the steel industry, OPC is a standard for communication. We recommend ABB and Prevas to follow this standard where possible to make the integration smoother. To keep the cost of the integration to a minimum it is also recommended to reuse existing resources. This can, however, have a negative effect on usability, and it is therefore important to keep a balance between cost and usability.

    The prototype made in this thesis accomplishes the goal of transferring the functionalities used by operators of Prevas FOCS to 800xA so that operators can control the processes using only one integrated system.

  • 7.
    Björdal, Gustav
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    String Variables for Constraint-Based Local Search (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    String variables occur as a natural part of many computationally challenging problems. Usually, such problems are solved using problem-specific algorithms implemented from first principles, which can be a time-consuming and error-prone task. A constraint solver is a framework that can be used to solve computationally challenging problems by first declaratively defining the problem and then solving it using specialised off-the-shelf algorithms, which can cut down development time significantly and result in faster solution times and higher solution quality. There are many constraint solving technologies, one of which is constraint-based local search (CBLS). However, very few constraint solvers have native support for solving problems with string variables. The goal of this thesis is to add string variables as a native type to the CBLS solver OscaR/CBLS. The implementation was experimentally evaluated on the Closest String Problem and the Word Equation System problem. The evaluation shows that string variables for CBLS can be a viable option for solving string problems. However, further work is required to obtain even more competitive performance.
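    The following hypothetical Python sketch illustrates the constraint-based local search idea on a string variable for the Closest String Problem: a violation function (here the maximum Hamming distance to the input strings) guides greedy single-character moves until a local optimum is reached. It is not OscaR/CBLS and not the thesis implementation.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def closest_string(strings, alphabet="ACGT", max_moves=1000):
        n = len(strings[0])
        candidate = list(strings[0])                      # start from any input string
        cost = lambda c: max(hamming(c, s) for s in strings)
        for _ in range(max_moves):
            best = (cost(candidate), None, None)
            for i in range(n):                            # neighbourhood: change one position
                for ch in alphabet:
                    if ch == candidate[i]:
                        continue
                    trial = candidate[:i] + [ch] + candidate[i + 1:]
                    c = cost(trial)
                    if c < best[0]:
                        best = (c, i, ch)
            if best[1] is None:                           # no improving move: local optimum
                break
            candidate[best[1]] = best[2]
        return "".join(candidate), cost(candidate)

    print(closest_string(["ACGTACGT", "ACGTTCGT", "TCGTACGA"]))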

  • 8.
    Björklund, Henrik
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Combinatorial Optimization for Infinite Games on Graphs (2005). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Games on graphs have become an indispensable tool in modern computer science. They provide powerful and expressive models for numerous phenomena and are extensively used in computer-aided verification, automata theory, logic, complexity theory, computational biology, etc.

    The infinite games on finite graphs we study in this thesis have their primary applications in verification, but are also of fundamental importance from the complexity-theoretic point of view. They include parity, mean payoff, and simple stochastic games.

    We focus on solving graph games by using iterative strategy improvement and methods from linear programming and combinatorial optimization. To this end we consider old strategy evaluation functions, construct new ones, and show how all of them, due to their structural similarities, fit into a unifying combinatorial framework. This allows us to employ randomized optimization methods from combinatorial linear programming to solve the games in expected subexponential time.

    We introduce and study the concept of a controlled optimization problem, capturing the essential features of many graph games, and provide sufficient conditions for solvability of such problems in expected subexponential time.

    The discrete strategy evaluation function for mean payoff games, which we derive from the new controlled longest-shortest path problem, leads to improvement algorithms that are considerably more efficient than the previously known ones, and also improves the efficiency of algorithms for parity games.

    We also define the controlled linear programming problem, and show how the games are translated into this setting. Subclasses of the problem, more general than the games considered, are shown to belong to NP ∩ coNP, or even to be solvable by subexponential algorithms.

    Finally, we take the first steps in investigating the fixed-parameter complexity of parity, Rabin, Streett, and Muller games.

    List of papers
    1. A discrete subexponential algorithm for parity games
    2003. In: STACS 2003, 20th Annual Symposium on Theoretical Aspects of Computer Science, 2003, pp. 663-674. Chapter in book (Other academic). Published
    Identifiers: urn:nbn:se:uu:diva-92509 (URN), 3-540-00623-0 (ISBN)
    Available from: 2005-01-18. Created: 2005-01-18. Bibliographically approved.
    2. Complexity of model checking by iterative improvement: the pseudo-Boolean framework
    2003. In: Perspectives of Systems Informatics: 5th International Andrei Ershov Memorial Conference, 2003, pp. 381-394. Chapter in book (Other academic). Published
    Identifiers: urn:nbn:se:uu:diva-92510 (URN), 3-540-20813-5 (ISBN)
    Available from: 2005-01-18. Created: 2005-01-18. Bibliographically approved.
    3. Memoryless determinacy of parity and mean payoff games: a simple proof
    2004. In: Theoretical Computer Science, ISSN 0304-3975, Vol. 310, pp. 365-378. Article in journal (Refereed). Published
    Identifiers: urn:nbn:se:uu:diva-92511 (URN)
    Available from: 2005-01-18. Created: 2005-01-18. Bibliographically approved.
    4. A combinatorial strongly subexponential algorithm for mean payoff games
    2004. In: Mathematical Foundations of Computer Science 2004, 2004, pp. 673-685. Chapter in book (Other academic). Published
    Identifiers: urn:nbn:se:uu:diva-92512 (URN), 3-540-22823-3 (ISBN)
    Available from: 2005-01-18. Created: 2005-01-18. Bibliographically approved.
    5. The controlled linear programming problem: DIMACS Technical Report 2004-41, September 2004
    Manuscript (Other academic)
    Identifiers: urn:nbn:se:uu:diva-92513 (URN)
    Available from: 2005-01-18. Created: 2005-01-18. Last updated: 2010-01-13. Bibliographically approved.
  • 9.
    Bylund, Markus
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Personal service environments: Openness and user control in user-service interaction (2001). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis describes my work with making the whole experience of using electronic services more pleasant and practical. More and more people use electronic services in their daily life — be it services for communicating with colleagues or family members, web-based bookstores, or network-based games for entertainment. However, electronic services in general are more difficult to use than they would have to be. They are limited in how and when users can access them. Services do not collaborate despite obvious advantages to their users, and they put the integrity and privacy of their users at risk.

    In this thesis, I argue that there are structural reasons for these problems rather than problems with content or the technology per se. The focus when designing electronic services tends to be on the service providers or on the artifacts that are used for accessing the services. I present an approach that focuses on the user instead, based on the concept of personal service environments. These provide a mobile locale for storing and running the electronic services of individual users. This gives the user increased control over which services to use, from where they can be accessed, and what personal information the services gather. The concept allows, and encourages, service collaboration, but not without letting the user maintain control over the process. Finally, personal service environments allow continuous usage of services while switching between interaction devices and moving between places.

    The sView system, which is also described, implements personal service environments and serves as an example of how the concept can be realized. The system consists of two parts. The first part is a specification of how both services for sView and infrastructure for handling services should be developed. The second part is a reference implementation of the specification, which includes sample services that add to and demonstrate the functionality of sView.

  • 10.
    Carlsson, Per
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Market and resource allocation algorithms with application to energy control (2001). Licentiate thesis, monograph (Other academic)
    Abstract [en]

    The energy markets of today are markets with rather few active participants. The participants are, with few exceptions, large producers and distributors. The market mechanisms that are used are constructed with this kind of a market situation in mind. With an automatic or semiautomatic approach, the market mechanism would be able to incorporate a larger number of participants. Smaller producers, and even consumers, could take an active part in the market. The gain is in more efficient markets, and – due to smaller fluctuations in demand – better resource usage from an environmental perspective.

    The energy markets of the Nordic countries (as well as many others) were deregulated during the last few years. The change has been radical and the situation is still rather new. We believe that the market can be made more efficient with the help of the dynamics of the small actors.

    The idealised world of economic theory often relies on assumptions such as continuous demand and supply curves. These assumptions are useful, and they do not introduce problems in the power market situation of today, with relatively few, large participants. When consumers and small producers are introduced on the market, the situation is different. Then it is a drawback if the market mechanisms cannot handle discontinuous supply and demand.

    The growth in accessibility to computational power and data communications that we have experienced in recent years (and are still experiencing) could be utilised when constructing mechanisms for the energy markets of tomorrow.

    In this thesis we suggest a new market mechanism, ConFAst, that utilises this technological progress to make it possible to incorporate a large number of active participants on the market. The mechanism does not rely on the assumptions above. The gain is a more efficient market with smaller fluctuations in demand over the day.

    To make this possible there is a need for efficient algorithms, in particular this mechanism relies on an efficient aggregation algorithm. An algorithm for aggregation of objective functions is part of this thesis. The algorithm handles maximisation with nonconcave, even noisy, objective functions. Experimental results show that the approach, in practically relevant cases, is significantly faster than the standard algorithm.

  • 11.
    Carlsson, Per
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Algorithms for Electronic Power Markets (2004). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis we focus on resource allocation problems and on electronic markets in particular. Our main application area is electricity markets. We present a number of algorithms and include practical experience.

    There is an ongoing restructuring of power markets in Europe and elsewhere; this implies that an industry that has previously been viewed as a natural monopoly becomes exposed to competition. In the thesis we move a step further, suggesting that end users should take an active part in the trade on power markets such as (i) day-ahead markets and (ii) markets handling close-to-real-time balancing of power grids. Our ideas and results can be utilised (a) to increase the efficiency of these markets and (b) to handle strained situations when power systems operate at their limits. For this we utilise information and communication technology available today and develop electronic market mechanisms designed for large numbers of participants, typically distributed over a power grid.

    The papers of the thesis cover resource allocation with separable objective functions, a market mechanism that accepts actors with discontinuous demand, and mechanisms that allow actors to express combinatorial dependencies between traded commodities on multi-commodity markets. Further we present results from field tests and simulations.

    List of papers
    1. Resource Allocation With Wobbly Functions
    2002. In: Computational Optimization and Applications, ISSN 0926-6003, Vol. 23, no. 2, pp. 171-200. Article in journal (Refereed). Published
    Identifiers: urn:nbn:se:uu:diva-92381 (URN)
    Available from: 2004-11-22. Created: 2004-11-22. Bibliographically approved.
    2. Extending Equilibrium Markets
    2001. In: IEEE Intelligent Systems, ISSN 1094-7167, Vol. 16, no. 4, pp. 18-26. Article in journal (Refereed). Published
    Identifiers: urn:nbn:se:uu:diva-92382 (URN)
    Available from: 2004-11-22. Created: 2004-11-22. Bibliographically approved.
    3. Communication Test of Electronic Power Markets through Power Line Communication
    Chapter in book (Other academic). Published
    Identifiers: urn:nbn:se:uu:diva-92383 (URN)
    Available from: 2004-11-22. Created: 2004-11-22. Bibliographically approved.
    4. A Tractable Mechanism for Time Dependent Markets
    Manuscript (Other academic)
    Identifiers: urn:nbn:se:uu:diva-92384 (URN)
    Available from: 2004-11-22. Created: 2004-11-22. Last updated: 2010-01-13. Bibliographically approved.
    5. A Flexible Model for Tree-Structured Multi-Commodity Markets
    2007 (English). In: Electronic Commerce Research, ISSN 1389-5753, E-ISSN 1572-9362, Vol. 7, no. 1, pp. 69-88. Article in journal (Refereed). Published
    National subject category: Computer Sciences
    Identifiers: urn:nbn:se:uu:diva-92385 (URN), 10.1007/s10660-006-0063-Y (DOI)
    Available from: 2004-11-22. Created: 2004-11-22. Last updated: 2011-02-28. Bibliographically approved.
    6. Market Simulations
    Manuscript (Other academic)
    Identifiers: urn:nbn:se:uu:diva-92386 (URN)
    Available from: 2004-11-22. Created: 2004-11-22. Last updated: 2010-01-13. Bibliographically approved.
  • 12.
    Edqvist, Samuel
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Scheduling Physicians using Constraint Programming (2008). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis (degree project)
  • 13.
    Elvander, Adam
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Developing a Recommender System for a Mobile E-commerce Application (2015). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    This thesis describes the process of conceptualizing and developing a recommender system for a peer-to-peer commerce application. The application in question is called Plick and is a vintage clothes marketplace where private persons and smaller vintage retailers buy and sell secondhand clothes from each other. Recommender systems are a relatively young field of research but have become more popular in recent years with the advent of big-data applications such as Netflix and Amazon. Examples of recommender systems being used in e-marketplace applications are however still sparse, and the main contribution of this thesis is insight into this sub-problem in recommender system research. The three main families of recommender algorithms are analyzed and two of them are deemed unfitting for the e-marketplace scenario. Out of the third family, collaborative filtering, three algorithms are described, implemented and tested on a large subset of data collected in Plick that consists mainly of clicks made by users on items in the system. By using both traditional and novel evaluation techniques it is further shown that a user-based collaborative filtering algorithm yields the most accurate recommendations when compared to actual user behavior. This represents a divergence from recommender systems commonly used in e-commerce applications. The paper concludes with a discussion on the cause and significance of this difference and the impact of certain data-preprocessing techniques on the results.
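    A minimal sketch of user-based collaborative filtering on click counts, assuming a small user-by-item matrix; the data and function are hypothetical illustrations of the general technique and are unrelated to Plick's data or the thesis code.

    import numpy as np

    def recommend(clicks, user, k=2, n_items=2):
        """clicks: user x item matrix of click counts; user: row index."""
        norms = np.linalg.norm(clicks, axis=1, keepdims=True)
        unit = clicks / np.where(norms == 0, 1, norms)
        sims = unit @ unit[user]                 # cosine similarity to every user
        sims[user] = -1                          # exclude the user themselves
        neighbours = np.argsort(sims)[-k:]       # k most similar users
        scores = sims[neighbours] @ clicks[neighbours]
        scores[clicks[user] > 0] = -1            # do not re-recommend clicked items
        return np.argsort(scores)[::-1][:n_items]

    clicks = np.array([[3, 0, 1, 0, 2],
                       [2, 1, 0, 0, 3],
                       [0, 4, 0, 2, 0],
                       [1, 0, 2, 0, 2]], dtype=float)
    print(recommend(clicks, user=0))             # item indices suggested for user 0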

  • 14.
    Fomkin, Ruslan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Optimization and Execution of Complex Scientific Queries (2009). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Large volumes of data produced and shared within scientific communities are analyzed by many researchers to investigate different scientific theories. Currently the analyses are implemented in traditional programming languages such as C++. This is inefficient for research productivity, since it is difficult to write, understand, and modify such programs. Furthermore, programs should scale over large data volumes and analysis complexity, which further complicates code development.

    This Thesis investigates the use of database technologies to implement scientific applications, in which data are complex objects describing measurements of independent events and the analyses are selections of events by applying conjunctions of complex numerical filters on each object separately. An example of such an application is analyses for the presence of Higgs bosons in collision events produced by the ATLAS experiment. For efficient implementation of such an ATLAS application, a new data stream management system SQISLE is developed. In SQISLE queries are specified over complex objects which are efficiently streamed from sources through the query engine. This streaming approach is compared with the conventional approach to load events into a database before querying. Since the queries implementing scientific analyses are large and complex, novel techniques are developed for efficient query processing. To obtain efficient plans for such queries SQISLE implements runtime query optimization strategies, which during query execution collect runtime statistics for a query, reoptimize the query using the collected statistics, and dynamically switch optimization strategies. The cost-based optimization utilizes a novel cost model for aggregate functions over nested subqueries. To alleviate estimation errors in large queries the fragments are decomposed into conjunctions of subqueries over which runtime statistics are measured. Performance is further improved by query transformation, view materialization, and partial evaluation. ATLAS queries in SQISLE using these query processing techniques perform close to or better than hard-coded C++ implementations of the same analyses.

    Scientific data are often stored in Grids, which manage both storage and computational resources. This Thesis includes a framework POQSEC that utilizes Grid resources to scale scientific queries over large data volumes by parallelizing the queries and shipping the data management system itself, e.g. SQISLE, to Grid computational nodes for the parallel query execution.

  • 15.
    Fransson, Moa
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Fåhraeus, Lisa
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Finding Patterns in Vehicle Diagnostic Trouble Codes: A data mining study applying associative classification (2015). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    In Scania vehicles, Diagnostic Trouble Codes (DTCs) are collected while driving and later loaded into a central database when the vehicle visits a workshop. These DTCs are used statistically to analyse vehicles' health status, which is why correctness in the data is desirable. In workshops, however, DTCs can also be generated by repair work and tests. Nevertheless, they are loaded into the database without any notification. In order to perform an accurate analysis of vehicle health status it would be desirable if such DTCs could be found and removed. The thesis has examined whether this is possible by searching for patterns in DTCs that indicate whether the DTCs were generated in a workshop or not. Due to its easily interpretable outcome, an associative classification method was used with the aim of categorising the data. The classifier was built applying well-known algorithms, and two classification algorithms were then developed to fit the data structure when labelling new data. The final classifier performed with an accuracy above 80 percent, and no distinctive differences between the two algorithms could be found. However, hardly 50 percent of all workshop DTCs were found. The conclusion is that either patterns in workshop DTCs occur in only 50 percent of the cases, or the classifier can only detect 50 percent of them. The patterns found could confirm previous knowledge regarding workshop-generated DTCs as well as provide Scania with new information.
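    The following hypothetical Python sketch illustrates the associative-classification idea on DTC-like data: simple single-code rules with support and confidence thresholds are mined from labelled cases, and the most confident matching rule labels a new case. The codes, labels and thresholds are invented for illustration; this is not the thesis classifier.

    from collections import Counter, defaultdict

    def mine_rules(cases, min_support=2, min_confidence=0.7):
        """cases: list of (set_of_codes, label). Returns {code: (label, confidence)}."""
        code_counts = Counter()
        code_label_counts = defaultdict(Counter)
        for codes, label in cases:
            for code in codes:
                code_counts[code] += 1
                code_label_counts[code][label] += 1
        rules = {}
        for code, total in code_counts.items():
            label, hits = code_label_counts[code].most_common(1)[0]
            confidence = hits / total
            if total >= min_support and confidence >= min_confidence:
                rules[code] = (label, confidence)
        return rules

    def classify(codes, rules, default="road"):
        matching = [rules[c] for c in codes if c in rules]
        if not matching:
            return default
        return max(matching, key=lambda r: r[1])[0]   # label of the most confident rule

    training = [({"P0101", "U0420"}, "workshop"), ({"P0101"}, "workshop"),
                ({"P0300"}, "road"), ({"P0300", "U0420"}, "road")]
    rules = mine_rules(training)
    print(classify({"P0101", "U0420"}, rules))        # -> "workshop"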

  • 16.
    Fredriksson, Susanne
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Auktorisation och ackreditering inom Försvarsmakten: En studie i nyttan av en standardiserad process för att hantera informationssäkerhet [Authorisation and accreditation in the Swedish Armed Forces: A study of the benefit of a standardised process for managing information security] (2011). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    Information Technology is an essential part of society today, not least in large organisations dealing with sensitive information. An example of such an organisation is the Swedish Armed Forces, which is indeed in need of ways to ensure information security in its Information Technology systems. The means used is an authorisation and accreditation process.

    All Information Technology systems go through a life cycle which includes realisation, usage, development and liquidation. In the Swedish Armed Forces the lifecycle is an authorisation process. Each step in the process is followed by an authorisation decision and one of these steps is accreditation. Accreditation is a formal approval to put the system in operation.

    The aim of the thesis is to study how large organisations may ensure information security when developing IT and the importance of a standardised process to handle information security. The study has been carried out by comparing the process of the Swedish Armed Forces with other methods to run projects and theories concerning development of IT.

    Interviews with Information Security Consultants at Combitech AB along with a study of documentation have been carried out in order to study the process. The theoretical framework has been formed through a literature study of project models and methods for development of IT.

    The main result of the thesis is that a standardised process which manages quality, communication, traceability, complexity, aims, operation and liquidation plans, risk assessment, and use cases increases the chance of a successful project and the achievement of information security when developing new IT. High quality in the formulation of the security aims and the specification of requirements, along with tests to establish that all requirements are fulfilled, is crucial when it comes to information security.

  • 17.
    Gutkovas, Ramunas
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Advancing concurrent system verification: Type based approach and tools (2014). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Concurrent systems, i.e., systems of parallel processes, are nearly ubiquitous, and verifying the correctness of such systems is becoming an important subject. Many formalisms have been invented for this purpose; however, new types of systems keep appearing and there is a need to handle larger systems. One example is wireless sensor networks, which are being deployed in increasing numbers in various areas, in particular safety-critical areas such as bush fire detection. Ensuring their correctness is therefore important.

    A process calculus is a formal language for modeling concurrent systems. The pi-calculus is a prominent example of such a language featuring message-passing concurrency. Psi-calculi is a parametric framework that extends the pi-calculus with arbitrary data and logics. Psi-calculi feature a universal theory with its results checked in an automated theorem prover ensuring their correctness.

    In this thesis, we extend psi-calculi expressiveness and modeling precision by introducing a sort system and generalised pattern matching. We show that the extended psi-calculi enjoy the same meta-theoretical results.

    We have developed the Pwb, a tool for the psi-calculi framework. The tool provides a high-level interactive symbolic execution and automated behavioral equivalence checking. We exemplify the use of the tool by developing a high-level executable model of a data collection protocol for wireless sensor networks.

    We are the first to introduce a session-type-based system for systems with unreliable communication. Remarkably, we do not need to add specific extensions to the types to accommodate such systems. We prove that the standard desirable properties of type systems also hold for our type system.

    List of papers
    1. The Psi-Calculi Workbench: A Generic Tool for Applied Process Calculi
    2015 (English). In: ACM Transactions on Embedded Computing Systems, ISSN 1539-9087, E-ISSN 1558-3465, Vol. 14, no. 1, article 9. Article in journal (Refereed). Published
    Abstract [en]

    Psi-calculi is a parametric framework for extensions of the pi-calculus with arbitrary data and logic. All instances of the framework inherit machine-checked proofs of the metatheory such as compositionality and bisimulation congruence. We present a generic analysis tool for psi-calculus instances, enabling symbolic execution and (bi)simulation checking for both unicast and broadcast communication. The tool also provides a library for implementing new psi-calculus instances. We provide examples from traditional communication protocols and wireless sensor networks. We also describe the theoretical foundations of the tool, including an improved symbolic operational semantics, with additional support for scoped broadcast communication.

    National subject category: Computer Sciences
    Research subject: Computer Science
    Identifiers: urn:nbn:se:uu:diva-233750 (URN), 10.1145/2682570 (DOI), 000349302200009
    Projects: ProFuN, UPMARC
    Funder: Swedish Foundation for Strategic Research (SSF), RIT08-0065
    Available from: 2015-01-21. Created: 2014-10-09. Last updated: 2016-08-16. Bibliographically approved.
    2. Session types for broadcasting
    2014 (English). In: Proc. 7th Workshop on Programming Language Approaches to Concurrency and Communication-cEntric Software, 2014, pp. 25-31. Conference paper (Refereed)
    Series: Electronic Proceedings in Theoretical Computer Science, 155
    National subject category: Computer Sciences
    Identifiers: urn:nbn:se:uu:diva-230882 (URN), 10.4204/EPTCS.155.4 (DOI)
    Conference: PLACES 2014, April 12, Grenoble, France
    Project: ProFuN
    Available from: 2014-06-13. Created: 2014-09-01. Last updated: 2014-10-27. Bibliographically approved.
    3. A Sorted Semantic Framework for Applied Process Calculi (extended abstract)
    2014 (English). In: Trustworthy Global Computing: TGC 2013, Springer Berlin/Heidelberg, 2014, pp. 103-118. Conference paper (Refereed)
    Abstract [en]

    Applied process calculi include advanced programming constructs such as type systems, communication with pattern matching, encryption primitives, concurrent constraints, nondeterminism, process creation, and dynamic connection topologies. Several such formalisms, e.g. the applied pi calculus, are extensions of the pi-calculus; a growing number is geared towards particular applications or computational paradigms.

    Our goal is a unified framework to represent different process calculi and notions of computation. To this end, we extend our previous work on psi-calculi with novel abstract patterns and pattern matching, and add sorts to the data term language, giving sufficient criteria for subject reduction to hold. Our framework can accommodate several existing process calculi; the resulting transition systems are isomorphic to the originals up to strong bisimulation. We also demonstrate different notions of computation on data terms, including cryptographic primitives and a lambda-calculus with erratic choice. Substantial parts of the meta-theory of sorted psi-calculi have been machine-checked using Nominal Isabelle.

    Series: Lecture Notes in Computer Science, ISSN 0302-9743; 8358
    National subject category: Computer Sciences
    Identifiers: urn:nbn:se:uu:diva-203382 (URN), 10.1007/978-3-319-05119-2_7 (DOI), 978-3-319-05118-5 (ISBN)
    Conference: 8th International Symposium on Trustworthy Global Computing, August 30-31, 2013, Buenos Aires, Argentina
    Projects: UPMARC, ProFuN
    Funder: Swedish Foundation for Strategic Research (SSF), RIT08-0065
    Available from: 2014-03-08. Created: 2013-07-09. Last updated: 2016-02-25.
  • 18.
    Gutkovas, Ramūnas
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Languages, Logics, Types and Tools for Concurrent System Modelling (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    A concurrent system is a computer system with components that run in parallel and interact with each other. Such systems are ubiquitous and are notably responsible for supporting the infrastructure for transport, commerce and entertainment. They are very difficult to design and implement correctly: many different modelling languages and verification techniques have been devised to reason about them and to verify their correctness. However, existing languages and techniques can only express a limited range of systems and properties.

    In this dissertation, we address some of the shortcomings of established models and theories in four ways: by introducing a general modal logic, extending a modelling language with types and a more general pattern-matching operation, providing automated tool support, and adapting an established behavioural type theory to specify and verify systems with unreliable communication.

    A modal logic for transition systems is a way of specifying properties of concurrent systems abstractly. We have developed a modal logic for nominal transition systems. Such systems are common and include the pi-calculus and psi-calculi. The logic is adequate, with regard to behavioural equivalence, for many process calculi, even for some for which no such logic had previously been considered, for example CCS, the pi-calculus, psi-calculi, the spi-calculus, and the fusion calculus.

    The psi-calculi framework is a parametric process calculi framework that subsumes many existing process calculi. We extend psi-calculi with a type system, called sorts, and a more general notion of pattern matching in an input process. This gives additional expressive power allowing us to capture directly even more process calculi than was previously possible. We have reestablished the main results of psi-calculi to show that the extensions are consistent.

    We have developed a tool based on psi-calculi, called the Psi-Calculi Workbench. It provides automation for executing psi-calculi processes and for generating a witness of a behavioural equivalence between processes. The tool can be used both as a library and as an interactive application.

    Lastly, we developed a process calculus for unreliable broadcast systems and equipped it with a binary session type system. The process calculus captures the operations of scatter and gather in wireless sensor and ad-hoc networks. The type system enjoys the usual property of subject reduction, meaning that well-typed processes reduce to well-typed processes. To cope with unreliability, we also introduce a notion of process recovery that does not involve communication. This is the first session type system for a model with unreliable communication.

    Delarbeten
    1. Modal Logics for Nominal Transition Systems
    Öppna denna publikation i ny flik eller fönster >>Modal Logics for Nominal Transition Systems
    Visa övriga...
    2016 (Engelska)Ingår i: Archive of Formal Proofs, ISSN 2150-914xArtikel i tidskrift (Refereegranskat) Published
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Identifikatorer
    urn:nbn:se:uu:diva-300027 (URN)
    Tillgänglig från: 2016-10-25 Skapad: 2016-08-01 Senast uppdaterad: 2016-11-18Bibliografiskt granskad
    2. A Sorted Semantic Framework for Applied Process Calculi
    Öppna denna publikation i ny flik eller fönster >>A Sorted Semantic Framework for Applied Process Calculi
    Visa övriga...
    2016 (Engelska)Ingår i: Logical Methods in Computer Science, ISSN 1860-5974, E-ISSN 1860-5974, Vol. 12, nr 1, 8Artikel i tidskrift (Refereegranskat) Published
    Abstract [en]

    Applied process calculi include advanced programming constructs such as type systems, communication with pattern matching, encryption primitives, concurrent constraints, nondeterminism, process creation, and dynamic connection topologies. Several such formalisms, e.g. the applied pi calculus, are extensions of the the pi-calculus; a growing number is geared towards particular applications or computational paradigms.

    Our goal is a unified framework to represent different process calculi and notions of computation. To this end, we extend our previous work on psi-calculi with novel abstract patterns and pattern matching, and add sorts to the data term language, giving sufficient criteria for subject reduction to hold. Our framework can directly represent several existing process calculi; the resulting transition systems are isomorphic to the originals up to strong bisimulation. We also demonstrate different notions of computation on data terms, including cryptographic primitives and a lambda-calculus with erratic choice. Finally, we prove standard congruence and structural properties of bisimulation; the proof has been machine-checked using Nominal Isabelle in the case of a single name sort.

    Nyckelord
    Expressiveness; Pattern matching; Type systems; Theorem proving; pi-calculus; Nominal sets
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap
    Identifikatorer
    urn:nbn:se:uu:diva-262199 (URN)10.2168/LMCS-12(1:8)2016 (DOI)000374769600004 ()
    Tillgänglig från: 2016-03-31 Skapad: 2015-09-09 Senast uppdaterad: 2016-08-26Bibliografiskt granskad
    3. A Session Type System for Unreliable Broadcast Communication
    Öppna denna publikation i ny flik eller fönster >>A Session Type System for Unreliable Broadcast Communication
    (Engelska)Manuskript (preprint) (Övrigt vetenskapligt)
    Abstract [en]

     Session types are formal specifications of communication protocols, allowing protocol implementations to be verified by typechecking. Up to now, session type disciplines have assumed that the communication medium is reliable, with no loss of messages. However, unreliable broadcast communication is common in a wide class of distributed systems such as ad-hoc and wireless sensor networks. Often such systems have structured communication patterns that should be amenable to analysis by means of session types. We introduce a process calculus with unreliable broadcast communication, and equip it with a session type system that we show is sound. We capture two common operations, scatter and gather, inhabiting dual session types. To cope with unreliability in a session we introduce an autonomous recovery mechanism that does not require acknowledgements from session participants. Our session type formalisation is the first to consider unreliable communication.

    Keywords
    session type, process calculus, broadcast, unreliable
    National subject category
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-300028 (URN)
    Available from: 2016-08-01 Created: 2016-08-01 Last updated: 2016-08-16. Bibliographically approved
    4. The Psi-Calculi Workbench: A Generic Tool for Applied Process Calculi
    2015 (English) In: ACM Transactions on Embedded Computing Systems, ISSN 1539-9087, E-ISSN 1558-3465, Vol. 14, no. 1, article 9. Article in journal (Refereed) Published
    Abstract [en]

    Psi-calculi is a parametric framework for extensions of the pi-calculus with arbitrary data and logic. All instances of the framework inherit machine-checked proofs of the metatheory such as compositionality and bisimulation congruence. We present a generic analysis tool for psi-calculus instances, enabling symbolic execution and (bi)simulation checking for both unicast and broadcast communication. The tool also provides a library for implementing new psi-calculus instances. We provide examples from traditional communication protocols and wireless sensor networks. We also describe the theoretical foundations of the tool, including an improved symbolic operational semantics, with additional support for scoped broadcast communication.

    National subject category
    Computer Science
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-233750 (URN)10.1145/2682570 (DOI)000349302200009 ()
    Project
    ProFuN, UPMARC
    Research funder
    Stiftelsen för strategisk forskning (SSF), RIT08-0065
    Available from: 2015-01-21 Created: 2014-10-09 Last updated: 2016-08-16. Bibliographically approved
  • 19.
    Hassani Bijarbooneh, Farshid
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Constraint Programming for Wireless Sensor Networks, 2015. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In recent years, wireless sensor networks (WSNs) have grown rapidly and have had a substantial impact in many applications. A WSN is a network that consists of interconnected autonomous nodes that monitor physical and environmental conditions, such as temperature, humidity, pollution, etc. If required, nodes in a WSN can perform actions to affect the environment.

    WSNs present an interesting and challenging field of research due to the distributed nature of the network and the limited resources of the nodes. It is necessary for a node in a WSN to be small to enable easy deployment in an environment and consume as little energy as possible to prolong its battery lifetime. There are many challenges in WSNs, such as programming a large number of nodes, designing communication protocols, achieving energy efficiency, respecting limited bandwidth, and operating with limited memory. WSNs are further constrained due to the deployment of the nodes in indoor and outdoor environments and obstacles in the environment.

    In this dissertation, we study some of the fundamental optimisation problems related to the programming, coverage, mobility, data collection, and data loss of WSNs, modelled as standalone optimisation problems or as optimisation problems integrated with protocol design. Our proposed solution methods come from various fields of research including constraint programming, integer linear programming, heuristic-based algorithms, and data inference techniques.

    List of papers
    1. Energy-efficient task mapping for data-driven sensor network macroprogramming using constraint programming
    2011 (English) In: Operations Research, Computing, and Homeland Defense, Hanover, MD: Institute for Operations Research and the Management Sciences, 2011, pp. 199-209. Conference paper (Refereed)
    Place, publisher, year, edition, pages
    Hanover, MD: Institute for Operations Research and the Management Sciences, 2011
    National subject category
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-136365 (URN)10.1287/ics.2011.0016 (DOI)978-0-9843378-1-1 (ISBN)
    Conference
    12th INFORMS Computing Society Conference
    Project
    ProFuN
    Research funder
    Stiftelsen för strategisk forskning (SSF), RIT08-0065
    Available from: 2011-01-11 Created: 2010-12-12 Last updated: 2015-03-09. Bibliographically approved
    2. An optimisation-based approach for wireless sensor deployment in mobile sensing environments
    2012 (English) In: Proc. Wireless Communications and Networking Conference 2012, IEEE Communications Society, 2012, pp. 2108-2112. Conference paper (Refereed)
    Place, publisher, year, edition, pages
    IEEE Communications Society, 2012
    National subject category
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-171536 (URN)10.1109/WCNC.2012.6214140 (DOI)000324580702038 ()978-1-4673-0436-8 (ISBN)
    Conference
    WCNC 2012
    Project
    ProFuN
    Research funder
    Stiftelsen för strategisk forskning (SSF), RIT08-0065
    Available from: 2012-06-11 Created: 2012-03-20 Last updated: 2015-03-09. Bibliographically approved
    3. Optimising quality of information in data collection for mobile sensor networks
    2013 (English) In: Proc. 21st International Symposium on Quality of Service, IEEE Communications Society, 2013, pp. 163-172. Conference paper (Refereed)
    Place, publisher, year, edition, pages
    IEEE Communications Society, 2013
    National subject category
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-208996 (URN)10.1109/IWQoS.2013.6550277 (DOI)000325614100019 ()978-1-4799-0589-8 (ISBN)
    Conference
    IEEE/ACM 21st International Symposium on Quality of Service (IWQoS), 3-4 June, 2013, Montreal, QC
    Project
    ProFuN
    Research funder
    Stiftelsen för strategisk forskning (SSF), RIT08-0065
    Available from: 2013-10-13 Created: 2013-10-13 Last updated: 2015-03-09. Bibliographically approved
    4. A constraint programming approach for managing end-to-end requirements in sensor network macroprogramming
    2014 (English) In: Proc. 3rd International Conference on Sensor Networks / [ed] Postolache, Octavian; van Sinderen, Marten; Ali, Falah; Benavente-Peces, César, Setúbal, Portugal: SciTePress, 2014, pp. 28-40. Conference paper (Refereed)
    Place, publisher, year, edition, pages
    Setúbal, Portugal: SciTePress, 2014
    National subject category
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-210431 (URN)10.5220/0004715200280040 (DOI)978-989-758-001-7 (ISBN)
    Conference
    SENSORNETS 2014
    Project
    ProFuN
    Research funder
    Stiftelsen för strategisk forskning (SSF), RIT08-0065
    Available from: 2014-01-09 Created: 2013-11-08 Last updated: 2015-03-09. Bibliographically approved
    5. Energy-efficient sensor selection for data quality and load balancing in wireless sensor networks
    2014 (English) In: Proc. 22nd International Symposium on Quality of Service, IEEE Communications Society, 2014, pp. 338-343. Conference paper (Refereed)
    Place, publisher, year, edition, pages
    IEEE Communications Society, 2014
    National subject category
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-229594 (URN)10.1109/IWQoS.2014.6914338 (DOI)000355927000044 ()978-1-4799-4852-9 (ISBN)
    Conference
    IWQoS 2014, May 26–27, Hong Kong, China
    Project
    ProFuN
    Research funder
    Stiftelsen för strategisk forskning (SSF), RIT08-0065
    Available from: 2014-05-27 Created: 2014-08-11 Last updated: 2015-08-26. Bibliographically approved
    6. Cloud-Assisted Data Fusion and Sensor Selection for Internet-of-Things
    (English) Manuscript (preprint) (Other academic)
    Abstract [en]

    The Internet of Things (IoT) is connecting people and smart devices on a scale that was once unimaginable. One major challenge for the IoT is to handle the vast amounts of sensing data generated by smart devices that are resource-limited and subject to missing data due to link or node failures. By combining cloud computing with the IoT, we present a cloud-based solution that takes into account the link quality and spatio-temporal correlation of data to minimise energy consumption by selecting sensors for sampling and relaying data. We propose a multi-phase adaptive sensing algorithm with a belief propagation protocol (ASBP), which can provide high data quality and reduce energy consumption by turning on only a small number of nodes in the network. We formulate the sensor selection problem and solve it using constraint programming (CP) and greedy search. We then use our message passing algorithm (belief propagation) to perform inference and reconstruct the missing sensing data. ASBP is evaluated on data collected from real sensors. The results show that while maintaining a satisfactory level of data quality and prediction accuracy, ASBP successfully provides load balancing among sensors and preserves 80% more energy compared with the case where all sensor nodes are actively involved.
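
    The greedy-search part of sensor selection can be pictured with a small sketch like the one below, assuming a toy model where each sensor has an energy cost and covers a set of regions; this only illustrates greedy selection under an energy budget, not the ASBP algorithm or its CP model, and all names and numbers are invented.

    def greedy_sensor_selection(costs, coverage, budget):
        # Repeatedly pick the sensor with the best newly-covered-regions-per-cost
        # ratio until no affordable sensor improves coverage.
        chosen, covered, spent = [], set(), 0.0
        while True:
            best, best_gain = None, 0.0
            for sensor, cost in costs.items():
                if sensor in chosen or spent + cost > budget:
                    continue
                gain = len(coverage[sensor] - covered) / cost
                if gain > best_gain:
                    best, best_gain = sensor, gain
            if best is None:
                break
            chosen.append(best)
            covered |= coverage[best]
            spent += costs[best]
        return chosen, covered

    costs = {"s1": 1.0, "s2": 1.5, "s3": 0.5}
    coverage = {"s1": {1, 2}, "s2": {2, 3, 4}, "s3": {4}}
    print(greedy_sensor_selection(costs, coverage, budget=2.0))  # (['s1', 's3'], {1, 2, 4})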

    National subject category
    Computer Science
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-241377 (URN)
    Project
    ProFuN
    Available from: 2015-01-12 Created: 2015-01-12 Last updated: 2015-03-09
  • 20.
    He, Jun
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Constraints for Membership in Formal Languages under Systematic Search and Stochastic Local Search, 2013. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis focuses on constraints for membership in formal languages under both the systematic search and stochastic local search approaches to constraint programming (CP). Such constraints are very useful in CP for the following three reasons: They provide a powerful tool for user-level extensibility of CP languages. They are very useful for modelling complex work shift regulation constraints, which exist in many shift scheduling problems. String constraints often arise in the analysis, testing, and verification of string-manipulating programs. We show in this thesis that CP solvers with constraints for membership in formal languages are much more suitable than existing solvers used in tools that have to solve string constraints. In the stochastic local search approach to CP, we make the following two contributions: We introduce a stochastic method of maintaining violations for the regular constraint and extend our method to the automaton constraint with counters. To improve the usage of constraints for which there exists no known constant-time algorithm for neighbour evaluation, we introduce a framework of using solution neighbourhoods, and give an efficient algorithm for constructing a solution neighbourhood for the regular constraint. In the systematic search approach to CP, we make the following two contributions: We show that there may be unwanted consequences when using a propagator that may underestimate the cost of a soft constraint, as the propagator may guide the search to incorrect (non-optimum) solutions to an over-constrained problem. We introduce and compare several propagators that correctly compute the cost of the edit-distance based soft-regular constraint. We show that the context-free grammar constraint is useful and introduce an improved propagator for it.

    List of papers
    1. An automaton constraint for local search
    2011 (English) In: Fundamenta Informaticae, ISSN 0169-2968, E-ISSN 1875-8681, Vol. 107, pp. 223-248. Article in journal (Refereed) Published
    National subject category
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-151745 (URN)10.3233/FI-2011-401 (DOI)000294729500006 ()
    Available from: 2011-04-17 Created: 2011-04-17 Last updated: 2015-12-10. Bibliographically approved
    2. Solution neighbourhoods for constraint-directed local search
    2012 (English) In: Proc. 27th ACM Symposium on Applied Computing, New York: ACM Press, 2012, pp. 74-79. Conference paper, presentation (Refereed)
    Place, publisher, year, edition, pages
    New York: ACM Press, 2012
    National subject category
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-186863 (URN)10.1145/2245276.2245294 (DOI)978-1-4503-0857-1 (ISBN)
    Conference
    SAC 2012, March 26-30, Riva del Garda, Italy
    Available from: 2012-03-28 Created: 2012-11-29 Last updated: 2013-04-02. Bibliographically approved
    3. Underestimating the cost of a soft constraint is dangerous: Revisiting the edit-distance based soft regular constraint
    2013 (English) In: Journal of Heuristics, ISSN 1381-1231, E-ISSN 1572-9397, Vol. 19, no. 5, pp. 729-756. Article in journal (Refereed) Published
    Abstract [en]

    Many real-life problems are over-constrained, so that no solution satisfying all their constraints exists. Soft constraints, with costs denoting how much the constraints are violated, are used to solve these problems. We use the edit-distance based soft regular constraint as an example to show that a propagation algorithm that sometimes underestimates the cost may guide the search to incorrect (non-optimal) solutions to an over-constrained problem. To compute the cost correctly for the edit-distance based soft regular constraint, we present a quadratic-time propagation algorithm based on dynamic programming and a proof of its correctness. We also give an improved propagation algorithm using an idea of computing the edit distance between two strings, which may also be applied to other constraints with propagators based on dynamic programming. The asymptotic time complexity of our improved propagator is always at least as good as that of our quadratic-time propagator, but significantly better when the edit distance is small. Our propagators achieve domain consistency on the problem variables and bounds consistency on the cost variable. Our method can also be adapted for the violation measure of the edit-distance based regular constraint for constraint-based local search.
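
    The cost measure underlying the constraint is ordinary edit distance, which the following self-contained Python sketch computes with the textbook quadratic dynamic program; it illustrates the recurrence the propagators build on, not the propagators themselves.

    def edit_distance(s, t):
        # d[i][j] = minimum number of insertions, deletions and substitutions
        # needed to turn s[:i] into t[:j].
        m, n = len(s), len(t)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                sub = 0 if s[i - 1] == t[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # delete s[i-1]
                              d[i][j - 1] + 1,        # insert t[j-1]
                              d[i - 1][j - 1] + sub)  # substitute or match
        return d[m][n]

    print(edit_distance("night", "day"))  # prints 5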

    Keywords
    constraint programming, soft regular constraint, edit distance, network flows, dynamic programming
    National subject category
    Computer Science
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-195699 (URN)10.1007/s10732-013-9222-1 (DOI)000325021400001 ()
    Research funder
    Vetenskapsrådet, 2007-6445; Vetenskapsrådet, 2011-6133
    Available from: 2013-02-26 Created: 2013-02-26 Last updated: 2013-10-28. Bibliographically approved
    4. Solving string constraints: The case for constraint programming
    2013 (English) In: Principles and Practice of Constraint Programming: CP 2013, Springer Berlin/Heidelberg, 2013, pp. 381-397. Conference paper (Refereed)
    Place, publisher, year, edition, pages
    Springer Berlin/Heidelberg, 2013
    Series
    Lecture Notes in Computer Science, 8124
    National subject category
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-195702 (URN)10.1007/978-3-642-40627-0_31 (DOI)000329244000031 ()978-3-642-40626-3 (ISBN)
    Conference
    19th International Conference on Principles and Practice of Constraint Programming, Uppsala, Sweden, September 16-20, 2013
    Research funder
    Vetenskapsrådet, 2007-6445; Vetenskapsrådet, 2011-6133
    Available from: 2013-09-18 Created: 2013-02-26 Last updated: 2014-02-06. Bibliographically approved
  • 21.
    Henriksson, David
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Innovation in software development: Lean and agiles effects on slack time and innovation in a large enterprise, 2016. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    The software industry is growing, and companies are forced to respond to the increased competition by innovating while at the same time making their organizations as efficient as possible. Many companies are adopting Lean and Agile software development to satisfy the customer and deliver faster. Previous research shows that it is hard to balance innovation and efficiency, and that while slack time is important for innovation, it is sometimes removed as part of waste elimination in the Lean philosophy.

    The aim of this research is to explore how Lean and Agile software development affects slack time and innovation within a large organization. To achieve this, the thesis studies how the implementation of Lean and Agile software development within a subsidiary of a large global technology company has affected innovation. The study has been carried out using qualitative semi-structured interviews.

    This research shows that a clash between Lean and Agile structures and more traditional ones may cause a lack of slack time due to resource optimization, taking away time that could be spent on innovation. It also appears that Lean and Agile software development implemented in a large company potentially favors process innovations over product innovations, and incremental innovation over radical innovation.

    A possible means to aid both the ideation and productification phases of innovation within a software development organization could be to have true cross-functional teams where developers have more contact with product management and customers.

  • 22.
    Ivanova, Milena
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Scalable Scientific Stream Query Processing, 2005. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Scientific applications require processing of high-volume on-line streams of numerical data from instruments and simulations. In order to extract information and detect interesting patterns in these streams scientists need to perform on-line analyses including advanced and often expensive numerical computations. We present an extensible data stream management system, GSDM (Grid Stream Data Manager) that supports scalable and flexible continuous queries (CQs) on such streams. Application dependent streams and query functions are defined through an object-relational model.

    Distributed execution plans for continuous queries are specified as high-level data flow distribution templates. A built-in template library provides several common distribution patterns from which complex distribution patterns are constructed. Using a generic template we define two customizable partitioning strategies for scalable parallel execution of expensive stream queries: window split and window distribute. Window split provides parallel execution of expensive query functions by reducing the size of stream data units, using application-dependent functions as parameters. By contrast, window distribute provides customized distribution of entire data units without reducing their size. We evaluate these strategies for a typical high-volume scientific stream application and show that window split is favorable when expensive queries are executed on limited resources, while window distribute is better otherwise. Profile-based optimization automatically generates optimized plans for a class of expensive query functions. We further investigate requirements for GSDM in Grid environments.
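
    A minimal Python sketch of the two strategies, assuming windows are plain lists and workers are just numbered slots (the real GSDM templates operate on distributed execution plans, so this only shows how the two partitionings differ):

    def window_split(windows, n_workers, chunker):
        # Window split: each logical window is cut into smaller pieces by an
        # application-dependent chunker, and the pieces are spread over workers
        # so that an expensive function can run on them in parallel.
        plans = [[] for _ in range(n_workers)]
        for window in windows:
            for i, piece in enumerate(chunker(window, n_workers)):
                plans[i % n_workers].append(piece)
        return plans

    def window_distribute(windows, n_workers):
        # Window distribute: whole windows are handed out round-robin,
        # without reducing their size.
        plans = [[] for _ in range(n_workers)]
        for i, window in enumerate(windows):
            plans[i % n_workers].append(window)
        return plans

    def slice_window(window, n):
        # Hypothetical chunker: cut a window into roughly n equally sized slices.
        step = max(1, len(window) // n)
        return [window[i:i + step] for i in range(0, len(window), step)]

    windows = [list(range(8)), list(range(8, 16))]
    print(window_split(windows, 2, slice_window))
    print(window_distribute(windows, 2))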

    GSDM is a fully functional system for parallel processing of continuous stream queries. GSDM includes components such as a continuous query engine based on a data-driven data flow paradigm, a compiler of CQ specifications into distributed execution plans, stream interfaces and communication primitives. Our experiments with real scientific streams on a shared-nothing architecture show the importance of both efficient processing and communication for efficient and scalable distributed stream processing.

  • 23.
    Johnsson, Matilda
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Knowledge sharing in a competence intensive company, 2015. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    The overall purpose of this study is to investigate a competence intensive company, evaluating their knowledge sharing. By identifying barriers and in what way IT can have a positive impact on the knowledge sharing, the study investigates how knowledge can be shared more efficiently within the company.

    A thorough case study was conducted and analysed with respect to a model proposed by the author. The model consists of three dimensions of knowledge creation, the “how, where and what”, together with identified barriers: individual, organisational and technological. Some of the identified barriers are lack of time, insufficient capture and evaluation of past mistakes, and lack of supporting organisational structure, rewards and recognition. IT is vital for the everyday work, and with improved tools time can be freed up. To share knowledge more efficiently, the management should communicate the value of knowledge sharing as well as implement knowledge sharing initiatives. There should also be explicit rewards for sharing knowledge, motivating the employees to take the time to share knowledge. Another approach, focusing on the employees’ socialisation rather than documentation, should be advocated, and more time needs to be set free. Upgrading the IT tools and harvesting lessons learnt from projects could accordingly enable more effective knowledge sharing for the company.

  • 24.
    Jonsson, Daniel
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Betal- och biljettsystem i kollektivtrafik: en kartläggning av aktörer och roller, 2010. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    An interoperable fare management system for public transport has long been a struggle for the actors in the public transport industry. A lot of effort has been put into creating a smart card solution that enables travellers to use the same card for paying and holding the ticket regardless of geographical position in Sweden. That project has failed, mostly due to a failure to take non-technical actors and aspects into consideration during the project process. This thesis highlights the major actors and their responsibilities. The ARKTRANS framework is used as an analytical tool to identify different roles. Throughout the thesis a sociotechnical perspective is used. The method developed in this thesis is suitable for mapping actors and responsibilities in a network surrounding a complex technical system. The method is especially suitable for analysing the ongoing project in middle Sweden developing and implementing a new fare management system for public transport. The study has identified a lack of actors in the role of Fare Management Interoperability Provider. This absence is particularly interesting since a collaboration in southern Sweden, which has a Fare Management Interoperability Provider, has succeeded in the struggle towards interoperability.

     

  • 25.
    Karlsson, Maria
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Market based programming and resource allocation, 2003. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The subject of this thesis is the concept of market-oriented programming and market protocols. We want to solve an allocation problem where some resources are to be divided among a number of agents. Each agent has a utility function telling how much the current allocation is worth for it. The goal is to allocate the resources among the agents in a way that maximizes the sum of the utilities of all agents. To solve this problem we use the concept of markets to create mechanisms for computational implementation.

    To achieve the advantages of market-oriented programming, we have to consider the conceptual view of the problem a main design issue. We want to investigate the possibilities to build computationally effective mechanisms which maintain the intuitive, easy-to-understand structure of market-based approaches. In the first paper we look at two examples from the literature and show that conceptual improvements of the approaches will make agent behavior more realistic. This will also make the examples fit into a more general theory. In the second paper we create a market mechanism for handling combinatorial markets. The mechanism includes an auction, where each iteration runs in polynomial time. The mechanism shows good performance when the number of resources is relatively small compared to the number of agents.

  • 26.
    Katchaounov, Timour
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Query Processing for Peer Mediator Databases, 2003. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The ability to physically interconnect many distributed, autonomous and heterogeneous software systems on a large scale presents new opportunities for sharing and reuse of existing information, and for the creation of new information and new computational services. However, finding and combining information in many such systems is a challenge even for the most advanced computer users. To address this challenge, mediator systems logically integrate many sources to hide their heterogeneity and distribution and give the users the illusion of a single coherent system.

    Many new areas, such as scientific collaboration, require cooperation between many autonomous groups willing to share their knowledge. These areas require that the data integration process can be distributed among many autonomous parties, so that large integration solutions can be constructed from smaller ones. For this we propose a decentralized mediation architecture, peer mediator systems (PMS), based on the peer-to-peer (P2P) paradigm. In a PMS, reuse of human effort is achieved through logical composability of the mediators in terms of other mediators and sources by defining mediator views in terms of views in other mediators and sources.

    Our thesis is that logical composability in a P2P mediation architecture is an important requirement and that composable mediators can be implemented efficiently through query processing techniques.

    In order to compute answers of queries in a PMS, logical mediator compositions must be translated to query execution plans, where mediators and sources cooperate to compute query answers. The focus of this dissertation is on query processing methods to realize composability in a PMS architecture in an efficient way that scales over the number of mediators.

    Our contributions consist of an investigation of the interfaces and capabilities for peer mediators, and the design, implementation and experimental study of several query processing techniques that realize composability in an efficient and scalable way.

    List of papers
    1. Functional data integration in a distributed mediator system
    2004. In: The Functional Approach to Data Management: Modeling, Analyzing and Integrating Heterogeneous Data, 2004, pp. 483-. Chapter in book, part of anthology (Other academic) Published
    Identifiers
    urn:nbn:se:uu:diva-90964 (URN)3-540-00375-4 (ISBN)
    Available from: 2003-10-17 Created: 2003-10-17. Bibliographically approved
    2. Interface capabilities for query processing in peer mediator systems
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:uu:diva-90965 (URN)
    Available from: 2003-10-17 Created: 2003-10-17 Last updated: 2010-01-13. Bibliographically approved
    3. Scalable view expansion in a peer mediator system
    2003. In: Eighth International Conference on Database Systems for Advanced Applications (DASFAA'03), pp. 107-116. Article in journal (Refereed) Published
    Identifiers
    urn:nbn:se:uu:diva-90966 (URN)
    Available from: 2003-10-17 Created: 2003-10-17. Bibliographically approved
    4. Optimizing queries in distributed and composable mediators
    1999. In: Proceedings of the Fourth IFCIS International Conference on Cooperative Information Systems, CoopIS'99, pp. 291-302. Article in journal (Refereed) Published
    Identifiers
    urn:nbn:se:uu:diva-90967 (URN)
    Available from: 2003-10-17 Created: 2003-10-17. Bibliographically approved
    5. Evaluation of join strategies for distributed mediation
    2001. In: Lecture Notes in Computer Science, Vol. 2151, pp. 308-322. Article in journal (Refereed) Published
    Identifiers
    urn:nbn:se:uu:diva-90968 (URN)
    Available from: 2003-10-17 Created: 2003-10-17. Bibliographically approved
    6. Object-oriented mediator queries to internet search engines
    2002. In: Lecture Notes in Computer Science, Vol. 2426, pp. 176-186. Article in journal (Refereed) Published
    Identifiers
    urn:nbn:se:uu:diva-90969 (URN)
    Available from: 2003-10-17 Created: 2003-10-17. Bibliographically approved
  • 27.
    Larmuseau, Adriaan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Protecting Functional Programs From Low-Level Attackers, 2016. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Software systems are growing ever larger. Early software systems were singular units developed by small teams of programmers writing in the same programming language. Modern software systems, on the other hand, consist of numerous interoperating components written by different teams and in different programming languages. While this more modular and diversified approach to software development has enabled us to build ever larger and more complex software systems, it has, however, made it harder to ensure the reliability and security of software systems.

    In this thesis we study and remedy the security flaws that arise when attempting to resolve the difference in abstractions between components written in high-level functional programming languages and components written in imperative low-level programming languages. High-level functional programming languages treat computation as the evaluation of mathematical functions. Low-level imperative programming languages, on the contrary, provide programmers with features that enable them to interact directly with the underlying hardware. While these features help programmers write more efficient software, they also make it easy to write malware through techniques such as buffer overflows and return-oriented programming.

    Concretely, we develop new run-time approaches for protecting components written in functional programming languages from malicious components written in low-level programming languages by making use of an emerging memory isolation mechanism. This memory isolation mechanism is called the Protected Module Architecture (PMA). Informally, PMA isolates the code and data that reside within a certain area of memory by restricting access to that area based on the location of the program counter.

    We develop these run-time protection techniques, which make use of PMA, for three important areas where components written in functional programming languages are threatened by malicious low-level components: foreign function interfaces, abstract machines and compilation. In every one of these three areas, we formally prove that our run-time protection techniques are indeed secure. In addition, we also provide implementations of our ideas through a fully functional compiler and a well-performing abstract machine.

  • 28.
    Magnusson, Jonathan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Social Network Analysis Utilizing Big Data Technology, 2012. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    Lately there has been an immense increase of data within modern society, and this is evident within the field of telecommunications. The amount of mobile data is growing fast. For a telecommunication operator, this provides means of getting more information about specific subscribers. The applications are many, such as segmentation for marketing purposes or detection of churners, people about to switch operator. Thus the analysis and information extraction are of great value. One approach to this analysis is social network analysis. Utilizing such methods yields ways of finding the importance of each individual subscriber in the network.

    This thesis aims at investigating the usefulness of social network analysis in telecommunication networks. As these networks can be very large, the methods used to study them must scale linearly when the network size increases. Thus, an integral part of the study is to determine which social network analysis algorithms have this scalability. Moreover, comparisons of software solutions are performed to find products suitable for these specific tasks.

    Another important part of using social network analysis is to be able to interpret the results. This can be cumbersome without expert knowledge. For that reason, a complete process flow for finding influential subscribers in a telecommunication network has been developed. The flow uses input easily available to the telecommunication operator. In addition to using social network analysis, machine learning is employed to uncover what behavior is associated with influence and to pinpoint subscribers behaving accordingly.
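
    A small Python sketch of the kind of analysis involved, using the networkx library on an invented call graph; degree centrality and PageRank are examples of measures whose computation scales well to large graphs, but they are not necessarily the exact measures used in the thesis.

    import networkx as nx

    # A tiny, made-up call graph: an edge A -> B means subscriber A called B.
    calls = [("anna", "bo"), ("anna", "cecilia"), ("bo", "cecilia"),
             ("cecilia", "anna"), ("david", "anna"), ("erik", "anna")]

    G = nx.DiGraph()
    G.add_edges_from(calls)

    # Two standard centrality measures over the call graph.
    degree = nx.degree_centrality(G)
    rank = nx.pagerank(G, alpha=0.85)

    for node in sorted(G, key=rank.get, reverse=True):
        print(f"{node}: degree={degree[node]:.2f}, pagerank={rank[node]:.3f}")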

  • 29.
    Melander, Lars
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Integrating Visual Data Flow Programming with Data Stream Management, 2016. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Data stream management and data flow programming have many things in common. In both cases one wants to transfer possibly infinite sequences of data items from one place to another, while performing transformations to the data. This thesis focuses on the integration of a visual programming language with a data stream management system (DSMS) to support the construction, configuration, and visualization of data stream applications. In the approach, analyses of data streams are expressed as continuous queries (CQs) that emit data in real-time. The LabVIEW visual programming platform has been adapted to support easy specification of continuous visualization of CQ results. LabVIEW has been integrated with the DSMS SVALI through a stream-oriented client-server API. Query programming is declarative, and it is desirable to make the stream visualization declarative as well, in order to raise the abstraction level and make programming more intuitive. This has been achieved by adding a set of visual data flow components (VDFCs) to LabVIEW, based on the LabVIEW actor framework. With actor-based data flows, visualization of data stream output becomes more manageable, avoiding the procedural control structures used in conventional LabVIEW programming while still utilizing the comprehensive, built-in LabVIEW visualization tools.

    The VDFCs are part of the Visual Data stream Monitor (VisDM), which is a client-server based platform for handling real-time data stream applications and visualizing stream output. VDFCs are based on a data flow framework that is constructed from the actor framework, and are divided into producers, operators, consumers, and controls. They allow a user to set up the interface environment, customize the visualization, and convert the streaming data to a format suitable for visualization.
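
    The division into producers, operators and consumers can be pictured with a small generator-based Python analogy (LabVIEW itself is visual, and the actor-based VDFCs are richer than this, so the sketch only mirrors the roles):

    def producer(n):
        # Producer: emits a finite stream of samples, standing in for a CQ result stream.
        for i in range(n):
            yield i * 0.5

    def operator(stream, f):
        # Operator: applies a transformation to every element that flows through.
        for item in stream:
            yield f(item)

    def consumer(stream):
        # Consumer: the visualization endpoint; here it just prints each value.
        for item in stream:
            print(f"plot <- {item:.2f}")

    consumer(operator(producer(5), lambda x: x * x))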

    Furthermore, it is shown how LabVIEW can be used to graphically define interfaces to data streams and dynamically load them in SVALI through a general wrapper handler. As an illustration, an interface has been defined in LabVIEW for accessing data streams from a digital 3D antenna.

    VisDM has successfully been tested in two real-world applications, one at Sandvik Coromant and one at the Ångström Laboratory, Uppsala University. For the first case, VisDM was deployed as a portable system to provide direct visualization of machining data streams. The data streams can differ in many ways as do the various visualization tasks. For the second case, data streams are homogenous, high-rate, and query operations are much more computation-demanding. For both applications, data is visualized in real-time, and VisDM is capable of sufficiently high update frequencies for processing and visualizing the streaming data without obstructions.

    The uniqueness of VisDM is the combination of a powerful and versatile DSMS with visually programmed and completely customizable visualization, while maintaining the complete extensibility of both.

  • 30.
    Norlander, Olof
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Analysis of stock forum texts to examine correlation to stock prices, 2016. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    In this thesis, four methods of classification from statistical learning have been used to examine correlations between stock forum discussions and stock prices. The classifiers Naive Bayes, support vector machine, AdaBoost and random forest were used on text data from two different stock forums to see if the text had any predictive power for the stock price of five different companies. The volatility and the direction of the price - whether it would go up or down - over a day were measured. The highest accuracy obtained for predicting high or low volatility came from random forest and was 85.2 %. For price difference the highest accuracy was 69.2 %, using the support vector machine. The average accuracy for predicting the price difference was 58.6 % and the average accuracy for predicting the volatility was 73.4 %. This thesis was made in collaboration with the company Scila, which works with stock market security.
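
    As an illustration of the general setup (not of the thesis's actual data, features or parameter choices), a support vector machine text classifier can be put together with scikit-learn roughly as follows; the posts and labels below are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Toy forum posts labelled with next-day price direction (1 = up, 0 = down).
    posts = ["strong report, buying more", "great quarter ahead",
             "selling everything, bad news", "profit warning expected",
             "new contract announced, bullish", "ceo resigns, looks bad"]
    labels = [1, 1, 0, 0, 1, 0]

    # Bag-of-words features weighted by tf-idf, fed into a linear SVM.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    model.fit(posts, labels)

    print(model.predict(["bad quarter, selling", "bullish on the report"]))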

  • 31.
    Norrmalm, Thomas
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Achieving Lean Software Development: Implementation of Agile and Lean Practices in a Manufacturing-Oriented Organization, 2011. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    The study reveals improvement areas in terms of lead time and quality in a traditional software development process of a large manufacturing-oriented organization, and identifies four obstacles to the application of a Lean software development framework in order to achieve such improvements. The data from interviews are matched to four predefined categories. These categories are evaluated using value stream mapping and a framework of seven common improvement areas in software development. A large project and task tracking system indicates that lead time is a real problem in the process. The most significant improvement area is wait time for change approval meetings. A second prominent improvement area is the large number of approval handshakes. At least a few of these handshakes are always approved, thus adding unnecessary lead time to the process.

    The four most imminent obstacles in adopting lean software development are identified through estimating the efficiency of two in-house derivations of Scrum and Kanban. The first obstacle is deep vertical but narrow horizontal expertise among developers. With some systems, there is only one developer who knows how to maintain the product. This makes it impossible to work as a team, which is an imperative principle of lean. A second obstacle is how the teams are arranged organizationally. They have a functional setup over three departments and three managers, which to some extent creates a silo mentality, rendering cooperation difficult. A third obstacle is how the teams are arranged geographically. Split over two locations, manufacturing and headquarters, they have different customers, objectives and a plain unfamiliarity with one another that has reduced the will and opportunity to communicate and coordinate. A fourth obstacle is the inherent conflict between the prescriptive activities of ITIL, optimized for IT operational services, and the adaptability of agile methodologies, optimized for rapid change and empirical decisions. ITIL fulfills a sometimes uncalled-for need to get all changes approved through several layers of management.

    The study concludes that Lean software development is in conflict with many traditional values of a manufacturing organization. Although lean may be prevalent in other parts of the organization, this does not necessarily include the IT function. IT still seems to have a hard time grasping the lean concepts of flow, waste and value.

  • 32.
    Olsson, Tomas
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Bootstrapping and decentralizing recommender systems, 2003. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of three papers on recommender systems.

    The first paper addresses the problem of making decentralized recommendations using a peer-to-peer architecture. Collaborating recommender agents are connected into a network of neighbors that exchange user recommendations to find new items to recommend. We achieved a performance comparable to a centralized system.

    The second paper deals with the bootstrapping problem for centralized recommender systems. Most recommender systems learn from experience but initially there are no users and no rated items to learn from. To deal with this problem we have developed the Genesis method. The method bootstraps a recommender system with artificial user profiles sampled from a probabilistic model built from prior knowledge. The paper describes how this was done for a k-nearest neighbor recommender algorithm in the movie domain. We were able to improve the performance of a k-nearest neighbor algorithm for single predictions but not for rank ordered lists of recommendations.

    The third paper identifies a new domain for recommendations – product configuration – where new recommender algorithms are needed. A system that helps users configuring their own PCs is described. Recommendations and a cluster-based help system together with a rule-based configurator assist the users in selecting appropriate features or complete PC configurations. The configurator ensures that users cannot choose incompatible components while the recommender system adds social information based on what other users have chosen. This introduces new complexity in the recommendation process on how to combine the recommendations from the configurator and the recommender system. The paper proposes (1) three new recommender algorithms on how to make recommendations in the domain of product configuration, (2) a method for adding social recommendations to a rule-based configurator and (3) a method for applying the Genesis method in this domain. In this case the Genesis method is implemented by a Bayesian belief net that captures the designers' prior knowledge on how to configure PCs. Then instances of complete configurations are sampled from the model and added to the recommender algorithm.
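
    The bootstrapping idea can be sketched as follows in Python: artificial user profiles are sampled from a simple, hand-written probabilistic model and then used as the neighbour set of a k-nearest-neighbour predictor. The genres, movies and numbers are invented, and the actual Genesis method (for example, the Bayesian belief net used in the configuration domain) is richer than this.

    import random
    import math

    random.seed(0)
    GENRES = ["action", "comedy", "drama"]
    MOVIES = {"m1": "action", "m2": "action", "m3": "comedy", "m4": "drama", "m5": "drama"}

    def sample_profile():
        # Draw an artificial user: genre preferences first, then ratings per movie.
        pref = {g: random.uniform(1, 5) for g in GENRES}
        return {m: max(1, min(5, random.gauss(pref[MOVIES[m]], 0.7))) for m in MOVIES}

    def cosine(u, v):
        common = set(u) & set(v)
        num = sum(u[m] * v[m] for m in common)
        den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
        return num / den if den else 0.0

    def knn_predict(target, profiles, movie, k=5):
        # Predict a rating as the similarity-weighted mean over the k nearest profiles.
        neigh = sorted(profiles, key=lambda p: cosine(target, p), reverse=True)[:k]
        weighted = [(cosine(target, p), p[movie]) for p in neigh if movie in p]
        return sum(s * r for s, r in weighted) / sum(s for s, _ in weighted)

    synthetic = [sample_profile() for _ in range(200)]  # bootstrap: no real users yet
    new_user = {"m1": 5, "m2": 4}                       # a first real user, two ratings
    print(round(knn_predict(new_user, synthetic, "m3"), 2))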

  • 33.
    Ottosson, Greger
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Integration of Constraint Programming and Integer Programming for Combinatorial Optimization, 2000. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The last several years have seen an increasing interest in combining the models and methods of optimization with those of constraint programming. Integration of the two was initially impeded by their different cultural origins, one having developed largely in the operations research community and the other in the computer science and artificial intelligence communities. The advantages of merger, however, are rapidly overcoming this barrier.

    The main objective for an integration of Constraint Programming over finite domains (CP) and Integer Programming (IP) is to take advantage of both the inference through constraint propagation and the (continuous) relaxations through Linear Programming (LP), in order to reduce the search needed to find feasible, good and optimal solutions.

    The key decisions to be made for integrating CP and IP are (a) the model(s), (b) the inference, (c) the relaxations, and (d) the search and branching strategies to use. In this thesis it is advocated to model specifically for a hybrid solver, having part of the model operated on by CP inference and part of the model constituting an LP relaxation. We propose mixed (global) constraints spanning both discrete and continuous variables to bridge the gap between CP and LP, providing inference as well as information for search strategies. Inference comes as cutting planes and domain reductions for LP and CP respectively. Branching is done in the CP part, utilizing information from the LP relaxation.

    Apart from a general framework for integration, specific constraints and structures studied include several variations of variable subscripts and piecewise linear functions. Computational experiments show the benefit and potential of a hybrid scheme, illustrated on production planning and configuration problems.

  • 34.
    Petrini, Johan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Querying RDF Schema Views of Relational Databases, 2008. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The amount of data found on the web today and its lack of semantics makes it increasingly harder to retrieve a particular piece of information. With the Resource Description Framework (RDF) every piece of information can be annotated with properties describing its semantics. The higher level language RDF Schema (RDFS) is defined in terms of RDF and provides means to describe classes of RDF resources and properties defined over these classes. Queries over RDFS data can be specified using the standard query language SPARQL. Since the majority of information in the world still resides in relational databases it should be investigated how to view and query their contents as views defined in terms of RDFS meta-data descriptions. However, processing of queries to general RDFS views over relational databases is challenging since the queries and view definitions are complex and the amount of data often is huge. A system, Semantic Web Abridged Relational Databases (SWARD), is developed to enable efficient processing of SPARQL queries to RDFS views of data in existing relational databases. The RDFS views, called universal property views (UPVs), are automatically generated provided a minimum of user input. A UPV is a general RDFS view of a relational database representing both its schema and data. Special attention is devoted to making the UPV represent as much of the relational database semantics as possible, including foreign and composite keys. A general query reduction algorithm, called PARtial evaluation of Queries (PARQ), for queries over complex views, such as UPVs, has been developed. The reduction algorithm is based on the program transformation technique partial evaluation. For UPVs, the PARQ algorithm is shown to elegantly reduce queries dramatically before regular cost-based optimization by a back-end relational DBMS. The results are verified by performance measurements of SPARQL queries to a commercial relational database.

  • 35.
    Raabjerg, Palle
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Extending psi-calculi and their formal proofs, 2012. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Psi-calculi is a parametric framework for extensions of the pi-calculus, with arbitrary data structures and logical assertions for facts about data. This thesis presents broadcast psi-calculi and higher-order psi-calculi, two extensions of the psi-calculi framework, allowing respectively one-to-many communications and the use of higher-order process descriptions through conditions in the parameterised logic. Both extensions preserve the purity of the psi-calculi semantics; the standard congruence and structural properties of bisimilarity are proved formally in Isabelle. The work going into the extensions shows that depending on the specific extension, working out the formal proofs can be a work-intensive process. We find that some of this work could be automated, and implementing such automation may facilitate the development of future extensions to the psi-calculi framework.

    List of papers
    1. Broadcast Psi-calculi with an Application to Wireless Protocols
    2011 (English) In: Software Engineering and Formal Methods: SEFM 2011, Springer Berlin/Heidelberg, 2011, pp. 74-89. Conference paper (Refereed)
    Abstract [en]

    Psi-calculi is a parametric framework for extensions of the pi-calculus, with arbitrary data structures and logical assertions for facts about data. In this paper we add primitives for broadcast communication to the psi-calculi framework in order to model wireless protocols. The additions preserve the purity of the psi-calculi semantics, and we formally prove the standard congruence and structural properties of bisimilarity. We demonstrate the expressive power of broadcast psi-calculi by modelling the wireless ad-hoc routing protocol LUNAR and verifying a basic reachability property.

    Place, publisher, year, edition, pages
    Springer Berlin/Heidelberg, 2011
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 7041
    National subject category
    Computer Science
    Identifiers
    urn:nbn:se:uu:diva-156736 (URN)10.1007/978-3-642-24690-6_7 (DOI)978-3-642-24689-0 (ISBN)
    Conference
    9th International Conference on Software Engineering and Formal Methods, November 14-18, 2011, Montevideo, Uruguay
    Project
    ProFuN, UPMARC
    Available from: 2011-12-06 Created: 2011-08-08 Last updated: 2016-02-25. Bibliographically approved
    2. Higher-order psi-calculi
    2014 (English) In: Mathematical Structures in Computer Science, ISSN 0960-1295, E-ISSN 1469-8072, Vol. 24, no. 2, article e240203. Article in journal (Refereed) Published
    Abstract [en]

    Psi-calculi is a parametric framework for extensions of the pi-calculus; in earlier work we have explored their expressiveness and algebraic theory. In this paper we consider higher-order psi-calculi through a technically surprisingly simple extension of the framework, and show how an arbitrary psi-calculus can be lifted to its higher-order counterpart in a canonical way. We illustrate this with examples and establish an algebraic theory of higher-order psi-calculi. The formal results are obtained by extending our proof repositories in Isabelle/Nominal.

    Ort, förlag, år, upplaga, sidor
    Cambridge University Press, 2014
    Nyckelord
    process calculi, psi calculi, isabelle, theorem proving, nominal, higher-order
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Identifikatorer
    urn:nbn:se:uu:diva-183610 (URN)10.1017/S0960129513000170 (DOI)000343643500003 ()
    Projekt
    UPMARC, ProFuN
    Forskningsfinansiär
    Stiftelsen för strategisk forskning (SSF)
    Tillgänglig från: 2013-06-24 Skapad: 2012-10-30 Senast uppdaterad: 2016-08-26Bibliografiskt granskad
  • 36.
    Rorsman, Albin
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Framtida IT-lösningar för ökad rörlighet på CAD-intensivt multidisciplinärt konsultföretag, 2016. Självständigt arbete på avancerad nivå (yrkesexamen), 20 poäng / 30 hp. Studentuppsats (Examensarbete)
    Abstract [en]

    This thesis investigates possible future IT solutions for increased mobility at CAD-intensive multi-disciplinary consulting companies. The development of new IT solutions is moving forward quickly, and new ways to implement IT in businesses are constantly being launched onto the market. As new IT solutions focused on increased mobility become available, companies' interest in implementing them in their businesses has grown. By investigating the engineering consultancy Bjerking AB and how it operates from an IT point of view, this thesis presents possible IT solutions for increased mobility for an engineering consulting company operating in the AEC sector. By conducting interviews with key users, observing meetings and researching the current state of Cloud Computing and hardware virtualization, the thesis shows that an implementation of an IT solution based on Cloud Computing and hardware virtualization is possible. By implementing a cloud-based IT solution using a private cloud distribution model and hardware-assisted GPU virtualization, a CAD-intensive consulting company can achieve increased mobility without lowered performance while maintaining a high level of data security.

  • 37.
    Sabesan, Manivasakan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Querying mediated web services, 2007. Licentiatavhandling, monografi (Övrigt vetenskapligt)
    Abstract [en]

    Web services provide a framework for data interchange between applications by incorporating standards such as XML Schema, WSDL, SOAP, and HTTP. They define operations to be invoked over a network to perform actions. These operations are described publicly in a WSDL document together with the data types of their arguments and results. Searching data accessible via web services is essential in many applications. However, web services do not provide any general query language or view capabilities, so applications that access the data currently must be developed in a regular programming language such as Java or C#.

    The thesis provides an approach to simplify querying web services data and proposes efficient processing of database queries to views of wrapped web services. To show the effectiveness of the approach, a prototype, the Web Service MEDiator system (WSMED), has been developed.

    WSMED provides general view and query capabilities over data accessible through web services by automatically extracting basic meta-data from WSDL descriptions. Based on the imported meta-data, the user can then define views that extract data from the results of calls to web service operations. The views can be queried using SQL. A given view can access many different web service operations in different ways depending on which view attributes are known. The views can be specified in terms of several declarative queries to be applied by the query processor. In addition, the user can provide semantic enrichments of the meta-data with key constraints to enable efficient query execution over the views by automatic query transformations. We evaluated the effectiveness of our approach over multilevel views of existing web services and show that the key constraint enrichments substantially improve query performance.
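
    As a rough illustration of the view-and-query idea (a minimal sketch with a stubbed web service operation; WSMED itself works differently, and the operation get_places_within used here is hypothetical), the result of a web service call can be materialized as a relational view and then queried with SQL:

        import sqlite3

        # Stub for a data providing web service operation; in a mediator system
        # the signature would be extracted from the service's WSDL description.
        def get_places_within(zip_code):
            # A real implementation would issue a SOAP/HTTP call here.
            return [("Uppsala", "75236"), ("Storvreta", "74331")]

        def populate_view(cursor, zip_code):
            """Materialize the result of one operation call as rows of a view table."""
            cursor.execute("CREATE TABLE IF NOT EXISTS places (name TEXT, zip TEXT)")
            cursor.executemany("INSERT INTO places VALUES (?, ?)",
                               get_places_within(zip_code))

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        populate_view(cur, "75236")

        # The view is now queryable with ordinary SQL; a key constraint such as
        # "zip is a key of places" would let a query processor avoid redundant calls.
        for row in cur.execute("SELECT name FROM places WHERE zip = ?", ("75236",)):
            print(row)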

  • 38.
    Sabesan, Manivasakan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Querying Data Providing Web Services, 2010. Doktorsavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    Web services are often used for search computing, where data is retrieved from servers providing information of different kinds. Such data providing web services return a set of objects for a given set of parameters without any side effects. There is a need to enable general and scalable search capabilities over data from data providing web services, which is the topic of this thesis.

    The Web Service MEDiator (WSMED) system automatically provides relational views of any data providing web service operations by reading the WSDL documents describing them. These views can be queried with SQL. Without any knowledge of the costs of executing specific web service operations the WSMED query processor automatically and adaptively finds an optimized parallel execution plan calling queried data providing web services.

    For scalable execution of queries to data providing web services, an algebra operator PAP adaptively parallelizes calls in execution plans to web service operations until no significant performance improvement is measured, based on monitoring the flow from web service operations without any cost knowledge or extensive memory usage.

    To comply with the Everything as a Service (XaaS) paradigm WSMED itself is implemented as a web service that provides web service operations to query and combine data from data providing web services. A web based demonstration of the WSMED web service provides general SQL queries to any data providing web service operations from a browser.

    WSMED assumes that all queried data sources are available as web services. To make any data providing system into a data providing web service WSMED includes a subsystem, the web service generator, which generates and deploys the web service operations to access a data source. The WSMED web service itself is generated by the web service generator.

    Delarbeten
    1. Web Service Mediation Through Multi-level Views
    2007 (Engelska) Ingår i: Proceedings of the CAiSE'07 Workshops and Doctoral Consortium: Vol. 2 UMICS, AOIS, WSIM, Doctoral Consortium, Trondheim, Norway: Tapir Academic Press, 2007, 755-766 s. Konferensbidrag (Refereegranskat)
    Abstract [en]

    The Web Service MEDiator system (WSMED) provides general query capabilities over data accessible through web services by reading WSDL meta-data descriptions. Based on imported meta-data, the user can define views that extract data from the results of calls to web service operations. The views can be queried using SQL. The views are specified in terms of declarative queries that access different web service operations in different ways depending on which view attributes are known in a query. To enable efficient query execution over the views by automatic query transformations, the user can provide semantic enrichments of the meta-data with key constraints. We evaluated the effectiveness of our approach over multi-level views of existing web services and show that the key constraint enrichments substantially improve query performance.

    Ort, förlag, år, upplaga, sidor
    Trondheim, Norway: Tapir Academic Press, 2007
    Nyckelord
    web service views, query optimization, semantic enrichment
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap med inriktning mot databasteknik
    Identifikatorer
    urn:nbn:se:uu:diva-10969 (URN)978-82-519-2246-3 (ISBN)
    Projekt
    Sida
    Tillgänglig från: 2007-08-16 Skapad: 2007-08-16 Senast uppdaterad: 2010-08-30Bibliografiskt granskad
    2. Adaptive Parallelization of Queries over Dependent Web Service Calls
    2009 (Engelska) Ingår i: Proc. 25th International Conference on Data Engineering, Piscataway, NJ: IEEE, 2009, 1725-1732 s. Konferensbidrag (Refereegranskat)
    Abstract [en]

    We have developed a system to process database queries over composed data providing web services. The queries are transformed into execution plans containing an operator that invokes any web service for given arguments. A common pattern in these query execution plans is that the output of one web service call is the input for another, etc. The challenge addressed in this paper is to develop methods to speed up such dependent calls in queries by parallelization. Since web service calls incur high-latency and message set-up costs, a naïve approach making the calls sequentially is time consuming and parallel invocations of the web service calls should improve the speed. Our approach automatically parallelizes the web service calls by starting separate query processes, each managing a parameterized sub-query, a plan function, for different parameter tuples. For a given query, the query processes are automatically arranged in a multi-level process tree where plan functions are called in parallel. The parallel plan is defined in terms of an algebra operator, FF_APPLYP, to ship in parallel to other query processes the same plan function for different parameters. By using FF_APPLYP we first investigated ways to set up different process trees manually. We concluded from our experiments that the best performing query execution plan is an almost balanced bushy tree. To automatically achieve the optimal process tree we modified FF_APPLYP to an operator AFF_APPLYP that adapts a parallel plan locally in each query process until an optimized performance is achieved. AFF_APPLYP starts with a binary process tree. During execution each query process in the tree makes local decisions to expand or shrink its process sub-tree by comparing the average time to process each incoming tuple. The query execution time obtained with AFF_APPLYP is shown to be close to the best time achieved by manually built query process trees.
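
    A schematic illustration of the local adaptation idea, measuring the average time per input tuple and increasing the degree of parallelism only while it keeps improving, is sketched below. It does not reproduce the FF_APPLYP/AFF_APPLYP operators or the process-tree machinery; the web service call is simulated with a sleep.

        import time
        from concurrent.futures import ThreadPoolExecutor

        def ws_call(arg):
            """Simulated high-latency data providing web service call."""
            time.sleep(0.05)
            return arg * 2

        def time_per_tuple(args, fanout):
            """Average processing time per input tuple at a given degree of parallelism."""
            start = time.time()
            with ThreadPoolExecutor(max_workers=fanout) as pool:
                list(pool.map(ws_call, args))
            return (time.time() - start) / len(args)

        def adapt_fanout(args, max_fanout=64):
            """Double the fan-out while the measured time per tuple keeps improving."""
            fanout, best = 1, time_per_tuple(args, 1)
            while fanout * 2 <= max_fanout:
                t = time_per_tuple(args, fanout * 2)
                if t >= best:          # no significant improvement: stop expanding
                    break
                fanout, best = fanout * 2, t
            return fanout

        print("chosen degree of parallelism:", adapt_fanout(list(range(32))))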

    Ort, förlag, år, upplaga, sidor
    Piscataway, NJ: IEEE, 2009
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap med inriktning mot databasteknik
    Identifikatorer
    urn:nbn:se:uu:diva-100901 (URN)10.1109/ICDE.2009.148 (DOI)978-1-4244-3422-0 (ISBN)
    Projekt
    SIDA and Swedish Foundation for Strategic Research under contract RIT08-0041
    Tillgänglig från: 2009-04-14 Skapad: 2009-04-10 Senast uppdaterad: 2010-08-30Bibliografiskt granskad
    3. Adaptive Parallelization of Queries to Data Providing Web Service Operations
    2012 (Engelska) Ingår i: Transactions on Large-Scale Data- and Knowledge-Centered Systems V, Springer, 2012, 49-69 s. Kapitel i bok, del av antologi (Refereegranskat)
    Abstract [en]

    A data providing web service operation returns a collection of objects for given parameters without any side effects. The Web Service MEDiator (WSMED) system automatically provides relational views of any data providing web service operations by reading their WSDL documents. These views are queried with SQL. In an execution plan, a call to a data providing web service operation may depend on the results from other web service operation calls. In other cases, different web service calls are independent of each other and can be called in any order. To parallelize and speed up both dependent and independent web service operation calls, WSMED has been extended with the adaptive operator PAP. It incrementally parallelizes calls to web service operations until no significant performance improvement is measured. The performance of PAP is evaluated using publicly available web services. The operator substantially improves the query performance without knowing the cost of calling a web service operation and without extensive memory usage.

    Ort, förlag, år, upplaga, sidor
    Springer, 2012
    Serie
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 7100
    Nyckelord
    Web service composition, Adaptive parallelization, Query optimization
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap med inriktning mot databasteknik
    Identifikatorer
    urn:nbn:se:uu:diva-128857 (URN)10.1007/978-3-642-28148-8_3 (DOI)978-3-642-28147-1 (ISBN)
    Projekt
    Swedish Foundation for Strategic Research under contract RIT08-0041
    Forskningsfinansiär
    eSSENCE - An eScience Collaboration
    Anmärkning

    accepted for publication

    Tillgänglig från: 2010-07-27 Skapad: 2010-07-27 Senast uppdaterad: 2013-02-28Bibliografiskt granskad
    4. Automated Web Service Query Service
    2010 (Engelska) Ingår i: International Journal of Web and Grid Services (IJWGS), ISSN 1741-1106, Vol. 6, nr 4, 400-423 s. Artikel i tidskrift (Refereegranskat) Published
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap med inriktning mot databasteknik
    Identifikatorer
    urn:nbn:se:uu:diva-128852 (URN)10.1504/IJWGS.2010.036405 (DOI)000285771300004 ()
    Projekt
    Swedish Foundation for Strategic Research under contract RIT08-0041, Sida, eSSENCE
    Tillgänglig från: 2010-07-27 Skapad: 2010-07-27 Senast uppdaterad: 2011-02-23Bibliografiskt granskad
    5. Querying Mediated Web Services
    2006 (Engelska) Ingår i: Proceedings of the IITC Conference 2006: 8th International Information Technology Conference 2006, Sri Lanka: Infotel Lanka Society Ltd, 2006, 39-44 s. Konferensbidrag (Refereegranskat)
    Abstract [en]

    Web services provide interfaces to web resources described by WSDL interface definitions. The Web Service MEDiator system (WSMED) enables querying of data accessible through web services. WSMED allows web service meta-data to be automatically extracted from any WSDL description. Views can then be created in terms of the imported meta-data and queried using SQL. To enhance query performance, WSMED allows the automatically extracted web service meta-data to be complemented with semantic enrichments. A WSMED prototype is being evaluated over existing web services to verify the effectiveness of the approach, and it is being investigated how semantic enrichments and other query optimization methods can be used for efficient querying of mediated web services.

    Ort, förlag, år, upplaga, sidor
    Sri Lanka: Infotel Lanka Society Ltd, 2006
    Nyckelord
    Web Services, Views, Query Processing, Binding Patterns
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap med inriktning mot databasteknik
    Identifikatorer
    urn:nbn:se:uu:diva-20019 (URN)955-8974-04-8 (ISBN)
    Projekt
    Sida
    Tillgänglig från: 2007-08-16 Skapad: 2007-08-16 Senast uppdaterad: 2010-08-30Bibliografiskt granskad
    6. Querying mediated web services
    2007 (Engelska) Licentiatavhandling, monografi (Övrigt vetenskapligt)
    Abstract [en]

    Web services provide a framework for data interchange between applications by incorporating standards such as XML Schema, WSDL, SOAP, and HTTP. They define operations to be invoked over a network to perform actions. These operations are described publicly in a WSDL document together with the data types of their arguments and results. Searching data accessible via web services is essential in many applications. However, web services do not provide any general query language or view capabilities, so applications that access the data currently must be developed in a regular programming language such as Java or C#.

    The thesis provides an approach to simplify querying web services data and proposes efficient processing of database queries to views of wrapped web services. To show the effectiveness of the approach, a prototype, the Web Service MEDiator system (WSMED), has been developed.

    WSMED provides general view and query capabilities over data accessible through web services by automatically extracting basic meta-data from WSDL descriptions. Based on the imported meta-data, the user can then define views that extract data from the results of calls to web service operations. The views can be queried using SQL. A given view can access many different web service operations in different ways depending on which view attributes are known. The views can be specified in terms of several declarative queries to be applied by the query processor. In addition, the user can provide semantic enrichments of the meta-data with key constraints to enable efficient query execution over the views by automatic query transformations. We evaluated the effectiveness of our approach over multilevel views of existing web services and show that the key constraint enrichments substantially improve query performance.

    Ort, förlag, år, upplaga, sidor
    Uppsala universitet, 2007
    Serie
    IT licentiate theses / Uppsala University, Department of Information Technology, ISSN 1404-5117 ; 2007-001
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap med inriktning mot databasteknik
    Identifikatorer
    urn:nbn:se:uu:diva-100903 (URN)
    Handledare
    Tillgänglig från: 2007-02-05 Skapad: 2009-04-10 Senast uppdaterad: 2014-08-07Bibliografiskt granskad
    7. Web Service Query Service
    2009 (Engelska) Ingår i: Proc. 11th International Conference on Information Integration and Web-based Applications and Services: iiWAS2009, New York: ACM Press, 2009, 692-697 s. Konferensbidrag (Refereegranskat)
    Ort, förlag, år, upplaga, sidor
    New York: ACM Press, 2009
    Serie
    books@ocg.at, 260
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap med inriktning mot databasteknik
    Identifikatorer
    urn:nbn:se:uu:diva-112048 (URN)978-1-60558-660-1 (ISBN)
    Projekt
    SSPI, Sida
    Tillgänglig från: 2010-01-07 Skapad: 2010-01-07 Senast uppdaterad: 2010-08-30Bibliografiskt granskad
    8. Adaptive parallelization of queries calling dependent data providing web services
    2011 (Engelska) Ingår i: New Frontiers in Information and Software as Services: Service and Application Design Challenges in the Cloud, Berlin: Springer-Verlag, 2011, 132-154 s. Kapitel i bok, del av antologi (Refereegranskat)
    Ort, förlag, år, upplaga, sidor
    Berlin: Springer-Verlag, 2011
    Serie
    Lecture Notes in Business Information Processing, 74
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap med inriktning mot databasteknik
    Identifikatorer
    urn:nbn:se:uu:diva-128858 (URN)10.1007/978-3-642-19294-4_6 (DOI)978-3-642-19293-7 (ISBN)
    Projekt
    Swedish Foundation for Strategic Research under contract RIT08-0041, Sida, eSSENCE
    Tillgänglig från: 2010-07-27 Skapad: 2010-07-27 Senast uppdaterad: 2011-02-23Bibliografiskt granskad
  • 39.
    Sandberg, Sven
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Games and Probabilistic Infinite-State Systems, 2007. Doktorsavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    Computer programs keep finding their way into new safety-critical applications, while at the same time growing more complex. This calls for new and better methods to verify the correctness of software. We focus on one approach to verifying systems, namely model checking. First, we investigate two categories of problems related to model checking: games and stochastic infinite-state systems. Finally, we join these two lines of research by studying stochastic infinite-state games.

    Game theory has been used in verification for a long time. We focus on finite-state 2-player parity and limit-average (mean payoff) games. These problems have applications in model checking for the μ-calculus, one of the most expressive logics for programs. We give a simplified proof of memoryless determinacy. The proof applies both to parity and limit-average games. Moreover, we suggest a strategy improvement algorithm for limit-average games. The algorithm is discrete and strongly subexponential.
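
    For reference, the quantity optimized in a limit-average (mean payoff) game is the long-run average of the edge rewards along a play; a common formulation is

        \[
        \mathrm{val}(e_0 e_1 e_2 \cdots) \;=\; \liminf_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} r(e_i),
        \]

    where r(e_i) is the reward of the i-th edge. One player tries to maximize this value while the other tries to minimize it, and memoryless determinacy says that both players have optimal positional (memoryless) strategies.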

    We also consider probabilistic infinite-state systems (Markov chains) induced by three types of models. Lossy channel systems (LCS) have been used to model processes that communicate over an unreliable medium. Petri nets model systems with unboundedly many parallel processes. Noisy Turing machines can model computers where the memory may be corrupted in a stochastic manner. We introduce the notion of eagerness and prove that all these systems are eager. We give a scheme to approximate the value of a reward function defined on paths. Eagerness allows us to prove that the scheme terminates. For probabilistic LCS, we also give an algorithm that approximates the limit-average reward. This quantity describes the long-run behavior of the system.
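
    The eagerness property can be paraphrased roughly as follows (an informal sketch, not the thesis's exact definition): the probability mass of runs whose reward is determined only after many steps decays exponentially, so truncating paths at length n gives a bounded approximation error,

        \[
        \Pr\bigl(\text{the reward is determined only after more than } n \text{ steps}\bigr) \;\le\; c \cdot \alpha^{\,n}
        \quad \text{for some } c > 0,\; 0 < \alpha < 1 .
        \]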

    Finally, we investigate Büchi games on probabilistic LCS. Such games can be used to model a malicious cracker trying to break a network protocol. We give an algorithm to solve these games.

    Delarbeten
    1. Memoryless Determinacy of Parity and Mean Payoff Games: A Simple Proof
    2004 Ingår i: Theoretical Computer Science, ISSN 0304-3975, Vol. 310, nr 1-3, 365-378 s. Artikel i tidskrift (Refereegranskat) Published
    Identifikatorer
    urn:nbn:se:uu:diva-95518 (URN)
    Tillgänglig från: 2007-03-02 Skapad: 2007-03-02Bibliografiskt granskad
    2. A Combinatorial Strongly Subexponential Strategy Improvement Algorithm for Mean Payoff Games
    2004 Ingår i: Proceedings of the 29th International Symposium on Mathematical Foundations of Computer Science (MFCS 2004), 2004, 673-685 s. Kapitel i bok, del av antologi (Övrigt vetenskapligt) Published
    Identifikatorer
    urn:nbn:se:uu:diva-95519 (URN)
    Tillgänglig från: 2007-03-02 Skapad: 2007-03-02Bibliografiskt granskad
    3. Eager Markov Chains
    2006 Ingår i: Proceedings of the 4th International Symposium on Automated Technology for Verification and Analysis (ATVA 2006), 2006, 24-38 s. Kapitel i bok, del av antologi (Övrigt vetenskapligt) Published
    Identifikatorer
    urn:nbn:se:uu:diva-95520 (URN)
    Tillgänglig från: 2007-03-02 Skapad: 2007-03-02Bibliografiskt granskad
    4. Limiting Behavior of Markov Chains with Eager Attractors
    2006 Ingår i: Proceedings of the 3rd International Conference on the Quantitative Evaluation of SysTems (QEST 2006), 2006, 253-262 s. Kapitel i bok, del av antologi (Övrigt vetenskapligt) Published
    Identifikatorer
    urn:nbn:se:uu:diva-95521 (URN)
    Tillgänglig från: 2007-03-02 Skapad: 2007-03-02Bibliografiskt granskad
    5. Stochastic Games with Lossy Channels
    Manuskript (Övrigt vetenskapligt)
    Identifikatorer
    urn:nbn:se:uu:diva-95522 (URN)
    Forskningsfinansiär
    Tillgänglig från: 2007-03-02 Skapad: 2007-03-02 Senast uppdaterad: 2011-01-17
  • 40.
    Scott, Joseph
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Other Things Besides Number: Abstraction, Constraint Propagation, and String Variable Types, 2016. Doktorsavhandling, monografi (Övrigt vetenskapligt)
    Abstract [en]

    Constraint programming (CP) is a technology in which a combinatorial problem is modeled declaratively as a conjunction of constraints, each of which captures some of the combinatorial substructure of the problem. Constraints are more than a modeling convenience: every constraint is partially implemented by an inference algorithm, called a propagator, that rules out some but not necessarily all infeasible candidate values of one or more unknowns in the scope of the constraint. Interleaving propagation with systematic search leads to a powerful and complete solution method, combining a high degree of re-usability with natural, high-level modeling.

    A propagator can be characterized as a sound approximation of a constraint on an abstraction of sets of candidate values; propagators that share an abstraction are similar in the strength of the inference they perform when identifying infeasible candidate values. In this thesis, we consider abstractions of sets of candidate values that may be described by an elegant mathematical formalism, the Galois connection. We develop a theoretical framework from the correspondence between Galois connections and propagators, unifying two disparate views of the abstraction-propagation connection, namely the oft-overlooked distinction between representational and computational over-approximations. Our framework yields compact definitions of propagator strength, even in complicated cases (i.e., involving several types, or unknowns with internal structure); it also yields a method for the principled derivation of propagators from constraint definitions.
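
    For readers unfamiliar with the formalism, a Galois connection between partially ordered sets (C, ⊑_C) of concrete candidate-value sets and (A, ⊑_A) of abstract descriptions is a pair of monotone maps α : C → A and γ : A → C such that

        \[
        \alpha(c) \sqsubseteq_A a \;\iff\; c \sqsubseteq_C \gamma(a)
        \qquad \text{for all } c \in C,\; a \in A,
        \]

    so α gives the most precise abstraction of a concrete set and γ gives the concretization of an abstract description. This is the standard definition, stated here only as background.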

    We apply this framework to the extension of an existing CP solver to constraints over strings, that is, words of finite length. We define, via a Galois connection, an over-approximation for bounded-length strings, and demonstrate two different methods for implementing this overapproximation in a CP solver. First we use the Galois connection to derive a bounded-length string representation as an aggregation of existing scalar types; propagators for this representation are obtained by manual derivation, or automated synthesis, or a combination. Then we implement a string variable type, motivating design choices with knowledge gained from the construction of the over-approximation. The resulting CP solver extension not only substantially eases modeling for combinatorial string problems, but also leads to substantial efficiency improvements over prior CP methods.

  • 41.
    Somla, Rafał
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Logics and Algorithms for Verification of Concurrent Systems, 2012. Doktorsavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    In this thesis we investigate how the known framework of automatic formal verification by model checking can be extended in different directions. One extension is to go beyond the common limitation of the existing specification formalisms, that they can describe only regular properties of components. This can be achieved using logics capable of expressing non-regular properties, such as the Propositional Dynamic Logic of Context-free Programs (PDLCF), Fixpoint Logic with Chop (FLC) or the Higher-order Fixpoint Logic (HFL). Our main result in this area is proving that the problem of model checking HFL formulas of order bounded by k is k-EXPTIME complete. In the proofs we demonstrate two model checking algorithms for that logic. We also show that PDLCF is equivalent to a proper fragment of FLC.

    The standard model checking algorithms, which are run on a single computer, are severely limited by the amount of available computing resources. A way to overcome this limitation is to develop distributed algorithms, which can be run on a cluster of computers and use their joint resources. In this thesis we show how a distributed model checking algorithm for the alternation-free fragment of the modal μ-calculus can be extended to handle formulas with one level of alternation. This is an important extension, since Lμ formulas with one level of alternation can express the same properties as logics LTL and CTL commonly used in formal verification.

    Finally, we investigate stochastic games which can be used to model additional aspects of components, such as their interaction with environment and their quantitative properties. We describe new algorithms for finding optimal values and strategies in turn-based stochastic games with reachability winning conditions. We prove their correctness and report on experiments where we compare them against each other and against other known algorithms, such as value iteration and strategy improvement.
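
    As background (a standard formulation rather than a quotation from the thesis), the optimal values v(s) of a turn-based stochastic game with a reachability objective satisfy optimality equations of the following shape, with v(s) = 1 at target states:

        \[
        v(s) =
        \begin{cases}
        \max_{(s,t)} v(t) & \text{if } s \text{ belongs to the maximizing player},\\
        \min_{(s,t)} v(t) & \text{if } s \text{ belongs to the minimizing player},\\
        \sum_{t} P(s,t)\, v(t) & \text{if } s \text{ is a probabilistic node.}
        \end{cases}
        \]

    Value iteration repeatedly applies this update starting from an initial valuation, while strategy improvement iterates over positional strategies instead.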

    Delarbeten
    1. Parallel model checking for LTL, CTL* and Lmu2
    2003 (Engelska) Ingår i: PDMC 2003: 2nd International Workshop on Parallel and Distributed Model Checking, 2003, 4-16 s. Konferensbidrag (Refereegranskat)
    Nationell ämneskategori
    Datorteknik Datavetenskap (datalogi)
    Identifikatorer
    urn:nbn:se:uu:diva-48668 (URN)
    Tillgänglig från: 2006-12-15 Skapad: 2006-12-15 Senast uppdaterad: 2013-01-22
    2. The Complexity of Model Checking Higher Order Fixpoint Logic
    2005 (Engelska) Ingår i: Mathematical Foundations of Computer Science 2005: 30th International Symposium, MFCS 2005, Gdansk, Poland, August 29–September 2, 2005. Proceedings, 2005. Konferensbidrag (Refereegranskat)
    Identifikatorer
    urn:nbn:se:uu:diva-80124 (URN)doi:10.1007/11549345_55 (DOI)3-540-28702-7 (ISBN)
    Tillgänglig från: 2006-04-25 Skapad: 2006-04-25 Senast uppdaterad: 2013-01-22
    3. Propositional dynamic logic of context-free programs and fixpoint logic with chop
    2006 (Engelska) Ingår i: Information Processing Letters, ISSN 0020-0190, E-ISSN 1872-6119, Vol. 100, nr 2, 72-75 s. Artikel i tidskrift (Refereegranskat) Published
    Abstract [en]

    This paper compares propositional dynamic logic of non-regular programs and fixpoint logic with chop. It identifies a fragment of the latter which is equi-expressive to the former. This relationship transfers several decidability and complexity results between the two logics.

    Nyckelord
    program specification, formal languages, concurrency
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Identifikatorer
    urn:nbn:se:uu:diva-107987 (URN)10.1016/j.ipl.2006.04.019 (DOI)000240296300007 ()
    Tillgänglig från: 2009-09-02 Skapad: 2009-09-02 Senast uppdaterad: 2013-01-22Bibliografiskt granskad
    4. The Complexity of Model Checking Higher-order Fixpoint Logic
    2007 (Engelska) Ingår i: Logical Methods in Computer Science, ISSN 1860-5974, Vol. 3, nr 2:7, 1-33 s. Artikel i tidskrift (Refereegranskat) Published
    Abstract [en]

    Higher-Order Fixpoint Logic (HFL) is a hybrid of the simply typed λ-calculus and the modal μ-calculus. This makes it a highly expressive temporal logic that is capable of expressing various interesting correctness properties of programs that are not expressible in the modal μ-calculus. This paper provides complexity results for its model checking problem. In particular we consider those fragments of HFL built by using only types of bounded order k and arity m. We establish k-fold exponential time completeness for model checking each such fragment. For the upper bound we use fixpoint elimination to obtain reachability games that are singly-exponential in the size of the formula and k-fold exponential in the size of the underlying transition system. These games can be solved in deterministic linear time. As a simple consequence, we obtain an exponential time upper bound on the expression complexity of each such fragment. The lower bound is established by a reduction from the word problem for alternating (k-1)-fold exponential space bounded Turing Machines. Since there are fixed machines of that type whose word problems are already hard with respect to k-fold exponential time, we obtain, as a corollary, k-fold exponential time completeness for the data complexity of our fragments of HFL, provided m exceeds 3. This also yields a hierarchy result in expressive power.

    Nyckelord
    mu-calculus, lambda-calculus, model checking, complexity
    Nationell ämneskategori
    Data- och informationsvetenskap
    Identifikatorer
    urn:nbn:se:uu:diva-107983 (URN)10.2168/lmcs-3(2:7)2007 (DOI)000262497200006 ()
    Tillgänglig från: 2009-09-02 Skapad: 2009-09-02 Senast uppdaterad: 2013-01-22Bibliografiskt granskad
    5. Accelerated Approximation for Stochastic Reachability Games: Extended version of paper New algorithms for solving simple stochastic games
    2010 (Engelska) Övrigt (Övrigt vetenskapligt)
    Abstract [en]

    In this paper new algorithms for finding optimal values and strategies in turn-based stochastic games with reachability objectives are presented, a special case of which are the simple stochastic games considered elsewhere [4, 11]. The general idea of these algorithms is to accelerate the successive approximation scheme for solving stochastic games [13], in which node values are updated in each iteration so that they converge to the optimal values of the game. This scheme is extended with a pair of positional strategies that are updated to remain greedy with respect to the current approximation. This way, optimal strategies can be discovered before the current values get close to the optimal ones. The approximation process is accelerated by predicting the approximate result of several updates of the current valuation and jumping directly to the predicted values.

    The new algorithms are based on three different acceleration techniques: iterative squaring, linear extrapolation, and linear programming, which differ in the cost of performing a single iteration and in the level of acceleration achieved. For each of these algorithms the complexity of a single iteration is polynomial. It is shown that the accelerated algorithms never perform worse than the basic, non-accelerated one, and exponential upper bounds on the number of iterations required to solve a game in the worst case are given. It is also proven that the new algorithms increase the frequency with which the greedy strategies are updated; the more often the strategies are updated, the higher the chance that the algorithm terminates early. The algorithm based on linear programming is proven to update the greedy strategies in every iteration, which makes it similar to the strategy improvement method, where a new strategy is also found in each iteration, likewise at the cost of solving linear constraint problems.

    The paper concludes with results of experiments in which the new algorithms were run on a sample of randomly generated games. The proposed acceleration techniques reduce the number of iterations of the basic algorithm by an order of magnitude and substantially increase the frequency with which the greedy strategies are updated. The algorithms based on linear programming and linear extrapolation displayed efficiency similar to those based on strategy improvement. This makes the algorithm based on linear extrapolation especially attractive, because it uses much simpler computations than linear constraint solving.
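
    A compact sketch of the basic, non-accelerated successive approximation scheme with greedy strategy extraction may make the description above concrete; this is an illustrative toy implementation, not the authors' code, and the small example game is made up.

        # Toy turn-based stochastic reachability game; node kinds are 'max', 'min',
        # 'avg' (probabilistic) and 'target'.
        GAME = {
            "s0": ("max", ["s1", "s2"]),
            "s1": ("avg", [(0.5, "t"), (0.5, "s2")]),
            "s2": ("min", ["s0", "t"]),
            "t":  ("target", []),
        }

        def update(v):
            """One successive-approximation step plus the greedy positional strategy."""
            new_v, greedy = {}, {}
            for node, (kind, succ) in GAME.items():
                if kind == "target":
                    new_v[node] = 1.0
                elif kind == "avg":
                    new_v[node] = sum(p * v[s] for p, s in succ)
                else:
                    pick = max if kind == "max" else min
                    choice = pick(succ, key=lambda s: v[s])
                    greedy[node] = choice            # greedy w.r.t. current valuation
                    new_v[node] = v[choice]
            return new_v, greedy

        v = {n: 1.0 if kind == "target" else 0.0 for n, (kind, _) in GAME.items()}
        for _ in range(200):                         # basic, non-accelerated iteration
            v, strategy = update(v)
        print(v, strategy)

    The acceleration techniques discussed in the paper replace the plain loop by steps that predict the effect of several such updates at once.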

    Nyckelord
    simple stochastic games, successive approximation, strategy improvement, complexity
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap
    Identifikatorer
    urn:nbn:se:uu:diva-179845 (URN)
    Anmärkning

    The original paper was published in Proceedings of the Workshop on Games in Design and Verification (GDV 2004), volume 119 of Electronic Notes in Theoretical Computer Science, pages 51–65. Elsevier. http://dx.doi.org/10.1016/j.entcs.2004.07.008

    Tillgänglig från: 2012-08-24 Skapad: 2012-08-24 Senast uppdaterad: 2013-01-22Bibliografiskt granskad
  • 42.
    Stefanova, Silvia
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Scalable Preservation, Reconstruction, and Querying of Databases in terms of Semantic Web Representations, 2013. Doktorsavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    This thesis addresses how Semantic Web representations, in particular RDF, can enable flexible and scalable preservation, recreation, and querying of databases.

    An approach has been developed for selective scalable long-term archival of relational databases (RDBs) as RDF, implemented in the SAQ (Semantic Archive and Query) system. The archival of user-specified parts of an RDB is specified using an extension of SPARQL, A-SPARQL. SAQ automatically generates an RDF view of the RDB, the RD-view. The result of an archival query is RDF triples stored in: i) a data archive file containing the preserved RDB content, and ii) a schema archive file containing sufficient meta-data to reconstruct the archived database. To achieve scalable data preservation and recreation, SAQ uses special query rewriting optimizations for the archival queries. It was experimentally shown that they improve query execution and archival time compared with naïve processing. The performance of SAQ was compared with that of other systems supporting SPARQL queries to views of existing RDBs.

    When an archived RDB is to be recreated, the reloader module of SAQ first reads the schema archive file and executes a schema reconstruction algorithm to automatically construct the RDB schema. The thus created RDB is populated by reading the data archive and converting the read data into relational attribute values. For scalable recreation of RDF archived data we have developed the Triple Bulk Load (TBL) approach where the relational data is reconstructed by using the bulk load facility of the RDBMS. Our experiments show that the TBL approach is substantially faster than the naïve Insert Attribute Value (IAV) approach, despite the added sorting and post-processing.
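
    The difference between the naive Insert Attribute Value strategy and the Triple Bulk Load idea can be sketched as follows (a simplified illustration with made-up triples; the actual SAQ implementation and its RDF vocabulary differ):

        import sqlite3
        from collections import defaultdict

        # RDF-archived rows as (subject, property, value) triples, invented here
        # for illustration; SAQ's archive files use a richer representation.
        triples = [
            ("row1", "name", "Ada"),   ("row1", "dept", "CS"),
            ("row2", "name", "Grace"), ("row2", "dept", "Math"),
        ]

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE person (name TEXT, dept TEXT)")

        # A naive IAV-style reconstruction would insert or update one attribute
        # value at a time. A TBL-style reconstruction instead groups the triples
        # by subject into whole rows (the sorting/grouping step) and hands them
        # to the DBMS bulk-load facility, approximated here by one executemany.
        rows = defaultdict(dict)
        for subj, prop, val in triples:
            rows[subj][prop] = val

        conn.executemany("INSERT INTO person VALUES (?, ?)",
                         [(r["name"], r["dept"]) for r in rows.values()])

        print(conn.execute("SELECT * FROM person").fetchall())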

    To view and query semi-structured Topic Maps data as RDF the prototype system TM-Viewer was implemented. A declarative RDF view of Topic Maps, the TM-view, is automatically generated by the TM-viewer using a developed conceptual schema for the Topic Maps data model. To achieve efficient query processing of SPARQL queries to the TM-view query rewrite transformations were developed and evaluated. It was shown that they significantly improve the query execution time.

    Delarbeten
    1. SPARQL queries to RDFS views of Topic Maps
    2010 (Engelska) Ingår i: International Journal of Metadata, Semantics and Ontologies (IJMSO), ISSN 1744-2621, Vol. 5, nr 1, 1-16 s. Artikel i tidskrift (Refereegranskat) Published
    Ort, förlag, år, upplaga, sidor
    Inderscience, 2010
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap med inriktning mot databasteknik
    Identifikatorer
    urn:nbn:se:uu:diva-142495 (URN)10.1504/IJMSO.2010.032647 (DOI)
    Projekt
    eSSENCE
    Tillgänglig från: 2010-03-31 Skapad: 2011-01-14 Senast uppdaterad: 2013-08-30Bibliografiskt granskad
    2. Optimizing Unbound-property Queries to RDF Views of Relational Databases
    2011 (Engelska) Konferensbidrag (Refereegranskat)
    Abstract [en]

    SAQ (Semantic Archive and Query) is a system for querying and long-term preservation of relational data in terms of RDF. In SAQ, relational data in a back-end DBMS is exposed as an RDF view, called the RD-view. SAQ can process arbitrary SPARQL queries to the RD-view. In addition, long-term preservation as RDF of selected parts of a relational database is specified by SPARQL queries to the RD-view. Such queries usually select sets of RDF properties, and thus in the query definition a property p is unknown. We call such queries unbound-property queries. This class of queries is also present in the SPARQL benchmarks. We optimize unbound-property queries by introducing a query transformation algorithm called Group Common Terms (GCT). It pulls out from a DNF-normalized query those common terms that can be translated to SQL predicates accessing the relational database. Our experiments using the Berlin SPARQL benchmark show that GCT substantially improves the query execution time against a back-end commercial relational DBMS for both selective and unselective unbound-property queries. We compared the performance of our approach with that of other systems processing SPARQL queries over views of relational databases and showed that GCT improves scalability compared to the approaches used by the other systems.
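
    The intuition behind pulling out common terms from a DNF-normalized query is the familiar distributive law; schematically (not the exact transformation used in the paper),

        \[
        (A \wedge B_1) \vee (A \wedge B_2) \vee \cdots \vee (A \wedge B_k)
        \;\equiv\;
        A \wedge (B_1 \vee B_2 \vee \cdots \vee B_k),
        \]

    where the common term A is the part that can be translated into an SQL predicate and pushed down to the relational database.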

    Ort, förlag, år, upplaga, sidor
    Bonn, Germany: , 2011. 16 s.
    Nyckelord
    SPARQL queries, RDF views of relational databases, query optimization, query rewrites, unbound property queries
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap med inriktning mot databasteknik
    Identifikatorer
    urn:nbn:se:uu:diva-199569 (URN)
    Konferens
    The 7th International Workshop on Scalable Semantic Web Knowledge Base Systems (SSWS 2011) at the 10th International Semantic Web Conference (ISWC 2011), Bonn, Germany, October 24th, 2011
    Tillgänglig från: 2013-05-07 Skapad: 2013-05-07 Senast uppdaterad: 2013-08-30
    3. Scalable long-term preservation of relational data through SPARQL queries
    2016 (Engelska) Ingår i: Semantic Web, ISSN 1570-0844, E-ISSN 2210-4968, Vol. 7, nr 2, 117-137 s. Artikel i tidskrift (Refereegranskat) Published
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Identifikatorer
    urn:nbn:se:uu:diva-199570 (URN)10.3233/SW-150173 (DOI)000373208100002 ()
    Projekt
    eSSENCE
    Tillgänglig från: 2016-02-12 Skapad: 2013-05-07 Senast uppdaterad: 2016-08-03Bibliografiskt granskad
    4. Scalable reconstruction of RDF-archived relational databases
    2013 (Engelska) Ingår i: Proc. 5th International Workshop on Semantic Web Information Management, New York: ACM Press, 2013, 5:1-4 s. Konferensbidrag (Refereegranskat)
    Ort, förlag, år, upplaga, sidor
    New York: ACM Press, 2013
    Nationell ämneskategori
    Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap med inriktning mot databasteknik
    Identifikatorer
    urn:nbn:se:uu:diva-199571 (URN)10.1145/2484712.2484717 (DOI)978-1-4503-2194-5 (ISBN)
    Konferens
    5th International Workshop on Semantic Web Information Management (SWIM 2013), June 23, New York
    Projekt
    eSSENCE
    Forskningsfinansiär
    eSSENCE - An eScience Collaboration
    Tillgänglig från: 2013-06-23 Skapad: 2013-05-07 Senast uppdaterad: 2013-12-17Bibliografiskt granskad
  • 43.
    Stenman, Erik
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Efficient Implementation of Concurrent Programming Languages, 2002. Doktorsavhandling, monografi (Övrigt vetenskapligt)
    Abstract [en]

    Dissertation in Computer Science to be publicly examined in Häggsalen, Ångströmlaboratoriet, Uppsala University, on Friday, November 1, 2002 at 1:00 pm for the degree of doctor of philosophy. The examination will be conducted in English.

    This thesis proposes and experimentally evaluates techniques for efficient implementation of languages designed for high availability concurrent systems. This experimental evaluation has been done while developing the High Performance Erlang (HiPE) system, a native code compiler for SPARC and x86. The two main goals of the HiPE system are to provide efficient execution of Erlang programs, and to provide a research vehicle for evaluating implementation techniques for concurrent functional programming languages.

    The focus of the thesis is the evaluation of two techniques that enable inter-process optimization through dynamic compilation. The first technique is a fast register allocator called linear scan, and the second is a memory architecture where processes share memory.

    The main contributions of the thesis are:

    An evaluation of linear scan register allocation in a different language setting. In addition, the performance of linear scan on the register-poor x86 architecture is evaluated for the first time.
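
    For readers unfamiliar with the algorithm, a minimal sketch of linear scan register allocation is shown below (the textbook formulation of Poletto and Sarkar, not HiPE's actual implementation): live intervals are visited in order of start point, expired intervals free their registers, and when no register is free the interval with the furthest end point is spilled.

        def linear_scan(intervals, num_regs):
            """intervals: list of (name, start, end). Returns {name: register or 'spill'}."""
            alloc, active, free = {}, [], list(range(num_regs))
            for name, start, end in sorted(intervals, key=lambda iv: iv[1]):
                # Expire intervals that ended before this one starts, freeing registers.
                for old in [iv for iv in active if iv[2] <= start]:
                    active.remove(old)
                    free.append(alloc[old[0]])
                if free:
                    alloc[name] = free.pop()
                    active.append((name, start, end))
                    active.sort(key=lambda iv: iv[2])
                else:
                    spill = active[-1]                   # active interval with furthest end
                    if spill[2] > end:
                        alloc[name] = alloc[spill[0]]    # steal its register
                        alloc[spill[0]] = "spill"
                        active[-1] = (name, start, end)
                        active.sort(key=lambda iv: iv[2])
                    else:
                        alloc[name] = "spill"
            return alloc

        print(linear_scan([("a", 0, 4), ("b", 1, 3), ("c", 2, 6)], num_regs=2))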

    A description of three different heap architectures (private heaps, shared heap, and a hybrid of the two), with a systematic investigation of implementation aspects and an extensive discussion on the associated performance trade-offs of the heap architectures. The description is accompanied by an experimental evaluation of the private vs. the shared heap setting.

    A novel approach to optimizing a concurrent program, by merging code from a sender with code from a receiver, is presented together with other methods for reducing the overhead of context switching.

    A description of the implementation aspects of a complete and robust native code Erlang system, which makes it possible to test compiler optimizations on real world programs.

  • 44.
    Tarasova, Natalya
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi.
    Classification of Hate Tweets and Their Reasons using SVM, 2016. Självständigt arbete på avancerad nivå (yrkesexamen), 20 poäng / 30 hp. Studentuppsats (Examensarbete)
    Abstract [en]

    This study focuses on classifying hate tweets directed at the mobile operators Verizon, AT&T and Sprint. The main aim is to use the machine learning algorithm Support Vector Machines (SVM) to classify messages into four categories - Hate, Reason, Explicit and Other - in order to identify a hate tweet and its reason.

    The study resulted in two methods: a naive method (the Naive Method, NM) and a more advanced method (the Partial Timeline Method, PTM). NM is binary in the sense that it asks the question "Does this tweet belong to the class Hate?". PTM asks the same question, but only of a restricted set of tweets, namely those published within ±30 minutes of the hate tweet.

    In summary, the results of the study indicate that PTM is more accurate than NM. However, it does not take all tweets on the user's timeline into account. The choice of method therefore involves a trade-off: PTM offers a more accurate classification, while NM offers a more comprehensive one.
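
    A minimal sketch of the kind of SVM pipeline described above, using scikit-learn's TF-IDF vectorizer and LinearSVC on invented example tweets; the thesis's feature set, data and exact four-class labelling are not reproduced here.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Invented toy examples; the real study used labelled tweets about the operators.
        tweets = ["I hate this operator, worst coverage ever",
                  "dropped calls all day again",
                  "thanks for the quick support",
                  "my bill is wrong for the third time"]
        labels = ["Hate", "Reason", "Other", "Reason"]

        model = make_pipeline(TfidfVectorizer(), LinearSVC())
        model.fit(tweets, labels)

        print(model.predict(["no signal and I hate it"]))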

  • 45.
    Thalmann, Lars
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
    Term-modal logic and quantifier-free dynamic assignment logic, 2000. Doktorsavhandling, monografi (Övrigt vetenskapligt)
    Abstract [en]

    In this dissertation, we present two new sorts of computer science logics.

    Many powerful logics exist today for reasoning about multi-agent systems, but in most of these it is hard to reason about an infinite or indeterminate number of agents. Also the naming schemes used in the logics often lack expressiveness to name agents in an intuitive way.

    To obtain a more expressive language for multi-agent reasoning and a better naming scheme for agents, we introduce in the first part of the dissertation a family of logics called term-modal logics. A main feature of our logics is the use of modal operators indexed by the terms of the logics. Thus, one can quantify over variables occurring in modal operators. In term-modal logics agents can be represented by terms, and knowledge of agents is expressed with formulas within the scope of modal operators.
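
    A small example of the kind of formula that term-modal logics allow (illustrative only), quantifying over an agent variable that indexes a knowledge operator:

        \[
        \forall x\, \bigl( \mathit{doctor}(x) \rightarrow \Box_x\, \mathit{diagnosis} \bigr),
        \]

    read as "every agent that is a doctor knows the diagnosis"; the agent is named by a term, and the variable x occurs both in a predicate and in the modal operator.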

    This gives us a flexible and uniform language for reasoning about the agents themselves and their knowledge. We give examples of the expressiveness of the languages and provide sequent-style and tableau-based proof systems for the logics. Furthermore, we give proofs of soundness and completeness with respect to the possible world semantics.

    In the second part of the dissertation, we treat another problem in reasoning about multi-agent systems, namely the problem of information updating. We develop a dynamic logic of assignments with a scoping operator instead of quantifiers. Function, relation symbols and logic variables are all rigidly interpreted in our semantics, while program variables are non-rigid. The scoping operator is used to distinguish between the value of a program variable before and after the execution of a program.

    We provide a tableau proof system for the logic. First, the system is proved complete without the star operator, and then with the star operator using an omega rule. The full logic is shown to be undecidable, while some interesting fragments are decidable.

  • 46.
    Truong, Thanh
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för datalogi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.