In this paper, we review previous work on the applicability and performance of integrated layer processing (ILP). ILP has been shown to clearly improve computer communication performance when integrating simple data manipulation functions, but the situati
Cache memory behaviour is becoming increasingly important as CPU speeds increase faster than memory speeds. The operation of caches is statistical, which means that system-level performance becomes unpredictable. In this paper we
Many current implementations of communication subsystems on workstation class computers transfer communication data to and from primary memory several times. This is due to software copying between user and operating system address spaces, presentation la
Delay Tolerant Networks (DTNs) are an emerging content dissemination platform in which mobile nodes opportunistically exchange content as they meet, with the intent of disseminating content among nodes that share common interests. During a meeting, nodes can exchange both content of direct interest to themselves and content that is of interest to a larger set of nodes that may be encountered in the future. The utility of a DTN is governed by the content exchange opportunity (the amount of content that can be exchanged during a meeting) as well as by the selection of content to be exchanged, so as to maximise the interest nodes will have in the information they are exposed to. Considering that there is a cost associated with content exchange (e.g. battery usage, buffer occupancy or consumed transmission opportunity), the aim for nodes participating in content dissemination should be to maximise their payoff. In this paper, we contribute a generic framework for describing the characteristics of content exchange among participating nodes in a network. We incorporate a distributed information popularity measurement and model the pairwise interaction of nodes as a bargaining problem. The outcome of this process is a fair split of dwelling time as a network resource and the selection of which content objects to exchange in order to maximise the nodes’ payoff. The framework is intended to serve as a basis for investigating content dissemination properties and various content exchange strategies in a DTN, a topic we address in this paper through experiments conducted to validate the function and correctness of the proposed framework.
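The abstract does not give the bargaining formulation itself; the sketch below illustrates how a meeting's dwelling time could be split as a Nash bargaining problem. All utility shapes, cost rates and function names are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: splitting a meeting's dwelling time T between two
# nodes as a Nash bargaining problem. Utility and cost rates are
# illustrative assumptions, not the paper's actual model.

def payoff(time_share, utility_rate, cost_rate):
    """Net payoff of exchanging content for `time_share` seconds."""
    return time_share * (utility_rate - cost_rate)

def nash_split(T, rate_a, rate_b, cost_a=0.1, cost_b=0.1, steps=1000):
    """Grid-search the dwelling-time split maximising the Nash product."""
    best_t, best_product = 0.0, float("-inf")
    for i in range(steps + 1):
        t_a = T * i / steps            # time spent serving node A
        p_a = payoff(t_a, rate_a, cost_a)
        p_b = payoff(T - t_a, rate_b, cost_b)
        if p_a >= 0 and p_b >= 0:      # both must gain vs. disagreement (0)
            product = p_a * p_b
            if product > best_product:
                best_t, best_product = t_a, product
    return best_t, T - best_t

# Symmetric rates yield an even split of the dwelling time.
print(nash_split(T=60.0, rate_a=1.0, rate_b=1.0))
```

With linear payoffs the Nash product is maximised by an even time split; the interesting cases arise when utility rates differ or payoffs saturate, which is where a bargaining formulation pays off.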
Opportunistic networks are systems with highly distributed operation, relying on the altruistic cooperation of highly heterogeneous, and not always software- and hardware-compatible, user nodes. Moreover, the absence of central coordination and control makes them vulnerable to malicious attacks. In this paper, we study the resilience of popular forwarding protocols to a representative set of challenges to their normal operation. These include jamming that locally disturbs message transfer between nodes, hardware/software failures and incompatibilities among nodes that render contact opportunities useless, and free-riding phenomena. We first formulate and promote the metric envelope concept as a tool for assessing the resilience of opportunistic forwarding schemes. Metric envelopes depart from the standard practice of average value analysis and explicitly account for the differentiated challenge impact due to node heterogeneity (device capabilities, mobility) and attackers’ intelligence. We then propose heuristics to generate worst- and best-case challenge realization scenarios and approximate the lower and upper bounds of the metric envelopes. Finally, we demonstrate the methodology in assessing the resilience of three popular forwarding protocols in the presence of the three challenges, and under a comprehensive range of mobility patterns. The metric envelope approach provides better insights into the level of protection that path diversity and message replication provide against different challenges, and enables more informed choices in opportunistic forwarding when network resilience becomes important.
Opportunistic networks are systems with highly distributed operation, relying on the altruistic cooperation of heterogeneous, and not always software- and hardware-compatible user nodes. Moreover, the absence of central control makes them vulnerable to malicious attacks. In this paper, we take a fresh look at the resilience of opportunistic forwarding to these challenges. In particular, we introduce and promote the use of metric envelopes as a resilience assessment tool. Metric envelopes depart from the standard practice of average value analysis and explicitly account for the differentiated impact that a challenge may have on the forwarding performance due to node heterogeneity (device capabilities, mobility) and attackers’ intelligence. The use of metric envelopes is demonstrated in the case of three challenges: jamming, hardware/software failures and incompatibilities, and free-riding phenomena. For each challenge, we first devise heuristics to generate worst- and best-case realization scenarios that can approximate the metric envelopes. Then we derive the envelopes of common performance metrics for three popular forwarding protocols under a comprehensive range of mobility patterns. The metric envelope approach enables more informed choices in opportunistic forwarding whenever network resilience considerations become important.
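The core of the metric envelope idea can be sketched in a few lines: evaluate a performance metric over many challenge realization scenarios and keep the pointwise worst and best cases rather than the average. The scenario data below are made-up placeholders, not results from the paper.

```python
# Illustrative sketch of a "metric envelope": instead of averaging a
# performance metric over challenge realizations, keep the pointwise
# worst and best cases. Scenario values below are made-up placeholders.

def metric_envelope(scenarios):
    """scenarios: equal-length metric time series, one per challenge
    realization. Returns (lower, upper) pointwise envelopes."""
    lower = [min(vals) for vals in zip(*scenarios)]
    upper = [max(vals) for vals in zip(*scenarios)]
    return lower, upper

# Delivery ratio over time under three hypothetical jammer placements.
realizations = [
    [0.2, 0.5, 0.7, 0.9],   # jammer far from traffic hot spots
    [0.1, 0.3, 0.6, 0.8],
    [0.0, 0.1, 0.3, 0.5],   # intelligent (worst-case) jammer placement
]
lo, hi = metric_envelope(realizations)
print(lo)  # [0.0, 0.1, 0.3, 0.5]
print(hi)  # [0.2, 0.5, 0.7, 0.9]
```

The hard part, which the paper's heuristics address, is generating realization scenarios extreme enough that the computed bounds actually approximate the true envelope.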
Two major performance bottlenecks in multiprocessor execution of protocols are contention for shared memory and for locks. Locks are used to protect shared messages and/or shared protocol state in a memory shared by competing processors. Mutual exclusion
In the early days of cloud computing, datacenters were sparsely deployed at distant locations far from end-users, with high end-to-end communication latency. Today's cloud datacenters, however, have become more geographically widespread, and network bandwidth keeps increasing, pushing end-user latency down. In this paper, we provide a comprehensive cloud reachability study, performing extensive global client-to-cloud latency measurements towards 189 datacenters from all major cloud providers. We leverage the well-known measurement platform RIPE Atlas, involving up to 8500 probes deployed in heterogeneous environments, e.g., homes and offices. Our goal is to evaluate the suitability of modern cloud environments for various current and predicted applications. We achieve this by comparing our latency measurements against known human perception thresholds, which allows us to draw inferences on the suitability of current clouds for novel applications such as augmented reality. Our results indicate that current cloud coverage can easily support several latency-critical applications, like cloud gaming, for the majority of the world's population.
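The comparison against perception thresholds amounts to a simple lookup; a minimal sketch follows. The threshold values and application classes are illustrative assumptions, not the paper's exact figures.

```python
# Hypothetical sketch: mapping measured RTTs to application classes via
# human perception thresholds. Values are illustrative assumptions, not
# the paper's exact figures.

THRESHOLDS_MS = [               # (application class, max tolerable RTT in ms)
    ("virtual reality", 20),
    ("cloud gaming", 80),
    ("video conferencing", 150),
]

def supported_applications(rtt_ms):
    """Return the application classes a measured RTT can support."""
    return [app for app, limit in THRESHOLDS_MS if rtt_ms <= limit]

print(supported_applications(45))  # ['cloud gaming', 'video conferencing']
```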
We study edge server support for multiple periodic real-time applications located in different clouds. The edge communicates both with sensor devices over wireless sensor networks and with applications over Internet-type networks. The edge caches sensor data and can respond to multiple applications with different timing requirements on the data. The purpose of caching is to reduce the number of direct accesses to the sensor, since sensor communication is very energy-expensive. However, the data will then age in the cache and eventually become stale for some applications. A push update method and the concept of age of information are used to schedule data updates to the applications. An aging model for periodic updates is derived. We propose that the scheduling should take into account periodic sensor updates, the differences in the periodic application updates, the aging in the cache, and communication variance. By numerical analysis, we study the number of deadline misses for two different scheduling policies with respect to different periods.
Edge computing aims to enable applications with stringent latency requirements, e.g., augmented reality, and to tame the overwhelming data streams generated by IoT devices. A core principle of this paradigm is to bring the computation from a distant cloud closer to service consumers and data producers. Consequently, the question of where to place edge computing facilities arises. We present a comprehensive analysis suggesting where to place general-purpose edge computing resources on an Internet-wide scale, basing our conclusions on extensive real-world network measurements. We perform traceroute measurements from RIPE Atlas to datacenters in the US, resulting in a graph of 11K routers. We identify the affiliations of the routers to determine the network providers that can act as edge providers. We devise several edge placement strategies and show that they can improve cloud access latency by up to 30%.
We consider large-scale Internet of Things applications requesting data from physical devices. We study the problem of timely dissemination of sensor data towards applications with freshness requirements by means of a cache. We aim to minimize direct access to the possibly battery-powered physical devices while improving Age of Information as a data freshness metric. We propose an Age of Information-aware scheduling policy for the physical device to push sensor updates to caches located in cloud data centers. The policy groups application requests based on freshness thresholds, thereby reducing the number of requests and threshold misses, and accounts for delay variation. The policy is introduced incrementally as we study its behavior over ideal and more realistic communication links with delay variation. We numerically evaluate the proposed policy against a simple yet widely used periodic schedule and show that our informed schedule outperforms the periodic schedule even under high delay variations.
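The grouping step can be sketched as follows: sort the applications' freshness thresholds and merge those close enough to be served by a single push schedule, reserving a margin for delay variation. The tolerance, margin and grouping rule below are illustrative assumptions, not the paper's actual policy.

```python
# Hypothetical sketch of grouping applications by freshness threshold so
# that one pushed sensor update serves several of them. The tolerance,
# delay margin and grouping rule are illustrative assumptions.

def group_by_threshold(thresholds_s, delay_margin_s=0.5, tolerance_s=2.0):
    """Sort freshness thresholds (seconds) and merge those within
    `tolerance_s` of a group's tightest member; each group is served by
    one push schedule whose period is the tightest threshold minus a
    margin budgeted for delay variation."""
    groups = []
    for t in sorted(thresholds_s):
        if groups and t - groups[-1]["tightest"] <= tolerance_s:
            groups[-1]["members"].append(t)
        else:
            groups.append({"tightest": t, "members": [t]})
    for g in groups:
        g["push_period"] = g["tightest"] - delay_margin_s
    return groups

# Apps wanting data no older than 5, 6, 12 and 13 s collapse to 2 pushes.
plan = group_by_threshold([5.0, 6.0, 12.0, 13.0])
print([g["push_period"] for g in plan])  # [4.5, 11.5]
```

Grouping trades a slightly earlier push for the tightest member against fewer accesses to the battery-powered device, which is the stated aim of the policy.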
We present work in progress on a communication framework that addresses the challenges of the decentralized multihop wireless environment. The main contribution is the combination of a fully distributed, asynchronous power save mechanism with adaptation of the timing patterns defined by that mechanism, improving the energy and bandwidth efficiency of communication in multihop wireless networks. The possibility of leveraging this strategy to provide more complex forms of traffic management is also explored.
The large number and wide diversity of IoT networks operating in unlicensed spectrum will create a complex and challenging interference environment. To avoid a 'tragedy of the commons', networks may need to more explicitly coordinate their use of the shared channel.
We describe a testbed for studying battery discharge behavior and the lifetime of wireless devices under controlled temperature conditions and present preliminary measurement results.
We present some preliminary results on LoRaWAN and IEEE 802.15.4-SUN interference in urban environments. The results are based on a simple simulation that is parameterized using PHY layer measurements of controlled interference scenarios.
The Internet today hosts several extensions for indirection, such as Mobile IP, NAT, proxies, route selection and various network overlays. At the same time, the user-controlled indirection mechanisms foreseen in the Internet architecture (e.g., loose source routing) cannot be used to implement these extensions. This is a consequence of the Internet's indirection semantics not being rich enough at some places and too rich at others. In order to achieve a more uniform handling of indirection, we propose SelNet, a network architecture based on a virtualized link layer with explicit indirection support. Indirection in this context refers to user-controlled steering of packet flows through the network. We discuss the architectural implications of such a scheme and report on implementation progress.
This brochure was produced for the inauguration of WISENET on December 7, 2007. It describes the future impact of WISENET, its application areas and the 10 partners. The three research areas "Node Integration & Energy", "Networking & Security" and "Wireless Communication" are briefly described, as well as the application projects in "Water Sensing" and "Transport".
We present a hardware platform for performing experimental studies of energy storage devices for low power wireless networks. It is based on a low-cost custom card that can apply fine-grain synthetic loads, both charge and discharge, to a set of batteries or capacitors and measure their response in detail. Loads can be defined from a "live" trace of a running wireless device, from a recorded trace, or programmatically via a script. This approach makes it practical to run well controlled, large scale, long running experiments and to obtain high precision and accuracy. We describe two proof-of-concept experiments using rechargeable Li coin cells and capacitors to demonstrate the capabilities of our platform.
The present invention relates to a method, call handling server (18), local wireless network (10) and computer program product for performing vertical handover of a wireless voice connection, which is a part of a voice connection set up between a portable communication device (22) and another device. The network comprises a number of access points (20) and the call handling server (18) for controlling voice connections to the portable device. The server comprises a control unit (24) that determines a handover situation for the wireless connection to the portable device based on a set of handover factors that comprise the position and movement of the portable communication device in an area of the local network and structural layout information of the area, together with knowledge of where in this area there is insufficient coverage.
With a rapidly increasing number of devices sharing access to the 2.4 GHz ISM band, interference becomes a serious problem for 802.15.4-based, low-power sensor networks. Consequently, interference mitigation strategies are becoming commonplace. In this paper, we consider the step that precedes interference mitigation: interference detection. We have performed extensive measurements to characterize how different types of interferers affect individual 802.15.4 packets. From these measurements, we define a set of features which we use to train a neural network to classify the source of interference of a corrupted packet. Our approach is sufficiently lightweight for online use in a resource constrained sensor network. It does not require additional hardware, nor does it use active spectrum sensing or probing packets. Instead, all information about interferers is gathered from inspecting corrupted packets that are received during the sensor network’s regular operation. Even without considering a history of earlier packets, our approach reaches a mean classification accuracy of 79.8%, with per interferer accuracies of 64.9% for WiFi, 82.6% for Bluetooth, 72.1% for microwave ovens, and 99.6% for packets that are corrupted due to insufficient signal strength.
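The classification step can be illustrated with a toy sketch. The paper trains a neural network; for brevity, this sketch substitutes a nearest-centroid classifier, and the feature space (e.g. mean RSSI during errors, error burst length, number of error bursts) and centroid values are made-up assumptions, not the paper's measurements.

```python
# Toy sketch of classifying the interference source of a corrupted
# packet. The paper uses a neural network; this sketch substitutes a
# nearest-centroid classifier over an assumed feature space. All
# centroid values are illustrative, not measured.

import math

CENTROIDS = {                    # features: (rssi_dbm, burst_len, n_bursts)
    "wifi":        (-55.0, 40.0, 1.5),
    "bluetooth":   (-60.0,  8.0, 3.0),
    "microwave":   (-50.0, 60.0, 1.0),
    "weak_signal": (-92.0,  2.0, 8.0),
}

def classify(features):
    """Return the interferer whose centroid is nearest (Euclidean)."""
    return min(CENTROIDS, key=lambda k: math.dist(CENTROIDS[k], features))

# A packet with one long, strong error burst resembles microwave-oven
# interference (50% duty cycle of the magnetron).
print(classify((-52.0, 55.0, 1.0)))  # microwave
```

The appeal of the approach is that all of these features come from packets the radio receives anyway, so no extra hardware or channel probing is needed.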
Recent natural disasters (earthquakes, floods, etc.) have shown that people heavily use platforms like Twitter to communicate and organize in emergencies. However, the fixed infrastructure supporting such communications may be temporarily wiped out. In such situations, the phones’ capabilities for infrastructure-less communication can fill in: by propagating data opportunistically (from phone to phone), tweets can still be spread, yet at the cost of delays.
In this paper, we present Twimight and its network security extensions. Twimight is an open source Twitter client for Android phones featuring a “disaster mode”, which users enable upon losing connectivity. In the disaster mode, tweets are not sent to the Twitter server but stored on the phone, carried around as people move, and forwarded via Bluetooth when in proximity with other phones. However, switching from an online centralized application to a distributed and delay-tolerant service relying on opportunistic communication requires rethinking the security architecture. We propose security extensions that offer security in the disaster mode comparable to the normal mode, protecting Twimight from basic attacks. We also propose a simple, yet efficient, anti-spam scheme to prevent users from being flooded with spam. Finally, we present a preliminary empirical performance evaluation of Twimight.
Recent events (earthquakes, floods, etc.) have shown that users heavily rely on online social networks (OSN) to communicate and organize during disasters and in their aftermath. In this paper, we discuss what features could be added to OSN apps for smartphones, using Twitter as an example, to make them even more useful in disaster situations. In particular, we consider cases where the fixed communication infrastructure is partially or totally wiped out and propose to equip regular Twitter apps with a disaster mode. The disaster mode relies on opportunistic communication and epidemic spreading of tweets from phone to phone. Such “disaster-ready” applications would allow users to resume (although limited) communication instantaneously and help distressed people to self-organize until regular communication networks are functioning again, or temporary emergency communication infrastructure is installed.
We argue why we believe that Twitter, with its simplicity and versatile features (e.g., retweets and hashtags), is a good platform to support a variety of different situations, and present Twimight, our disaster-ready Twitter application. In addition, we propose Twimight as a platform for disseminating sensor data providing information such as the locations of drinkable water sources. Finally, we propose to rely on interest matching to scale Twitter hashtag-based searches in an opportunistic environment. The combination of these features makes our opportunistic Twitter an ideal emergency kit for disaster situations. We discuss and define the main implementation and research challenges (both technical and non-technical).
A network node determines a voltage response characteristic of the battery in a low-powered user equipment and controls the activity pattern of the user equipment, aligning it with the voltage response characteristic of the battery to extend the useful life of the battery in the user equipment.
A network node is configured to monitor battery usage on behalf of the low power device and to predict the SOC and remaining life-time for the batteries in the low power devices. The network nodes are fully powered and have the computational capacity to use more complex and accurate models that are not feasible for implementation in resource-limited devices. The network node can calculate the SOC and remaining life-time for a low power device and may also change the transmission and reception patterns for the device to extend the life-time of the battery.