Starting November 2018, I am on extended academic leave.
This website, as well as the contact information below, is no longer up-to-date. Please check
https://nonsns.github.io for more information
Recent research has put forward the concept of Fog computing, a deported intelligence for IoT networks. Fog clusters are meant to complement current cloud deployments, providing compute and storage resources directly in the access network, which is particularly useful for low-latency applications. However, Fog deployments are expected to be less elastic than Cloud platforms, since elasticity in Cloud platforms comes from the scale of the data-centers. Thus, a Fog node dimensioned for the average traffic load of a given application will not be able to handle sudden bursts of traffic. In this paper, we explore such a use-case, where a Fog-based latency-sensitive application must offload some of its processing to the Cloud. We build an analytical queueing model for deriving the statistical response time of a Fog deployment under different request Load Balancing (LB) strategies, contrasting a naive scheme, an ideal one (LFU-LB, assuming a priori knowledge of the request popularity) and a practical one (LRU-LB, based on online learning of the popularity with an LRU filter). Using our model, and confirming the results through simulation, we show that our LRU-LB proposal achieves close-to-ideal performance, with high savings on Cloud offload cost with respect to a request-oblivious strategy in the explored scenarios.
@inproceedings{DR:ITC-18,
author = {Enguehard, Marcel and Carofiglio, Giovanna and Rossi, Dario},
title = {A popularity-based approach for effective Cloud offload in Fog clusters},
booktitle = {30th International Teletraffic Congress (ITC30)},
month = sep,
location = {Vienna, Austria},
year = {2018},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18itc.pdf}
}
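The LRU-LB idea above (learning request popularity online with an LRU filter) fits in a few lines. The sketch below is purely illustrative, not the paper's implementation; the `route` interface and the insert-on-miss policy are assumptions:

```python
from collections import OrderedDict

class LRULoadBalancer:
    """Toy LRU-LB: requests whose content name is in a fixed-size LRU
    filter are deemed popular and handled in the Fog; all others are
    offloaded to the Cloud while the filter learns their name."""

    def __init__(self, filter_size):
        self.filter = OrderedDict()
        self.size = filter_size

    def route(self, name):
        if name in self.filter:
            self.filter.move_to_end(name)    # refresh recency
            return "fog"
        self.filter[name] = True             # learn: remember this name
        if len(self.filter) > self.size:
            self.filter.popitem(last=False)  # evict least recently seen
        return "cloud"
```

A name thus pays one Cloud round-trip before being served locally, which is how the filter approximates LFU-LB's a priori popularity knowledge.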
Recent technology evolution allows network equipment to continuously stream a wealth of "telemetry" information, which pertains to multiple protocols and layers of the stack, at a very fine spatial grain and high frequency. Processing this deluge of telemetry data in real time clearly offers new opportunities for network control and troubleshooting, but also poses serious challenges. We tackle this challenge by applying streaming machine-learning techniques to the continuous flow of control- and data-plane telemetry data, with the purpose of real-time detection of BGP anomalies. In particular, we implement an anomaly detection engine that leverages DenStream, an unsupervised clustering technique, and apply it to features collected from a large-scale testbed comprising tens of routers traversed by 1 Terabit/s worth of real application traffic. In the spirit of the recent trend toward reproducibility of research results, we make our code and datasets available as open source to the scientific community.
@inproceedings{DR:BIGDAMA-18,
author = {Putina, Andrian and Rossi, Dario and Bifet, Albert and Barth, Steven and Pletcher, Drew and Precup, Cristina and Nivaggioli, Patrice},
title = {Telemetry-based stream-learning of BGP anomalies},
booktitle = {ACM SIGCOMM Workshop on Big Data Analytics and Machine Learning for Data Communication Networks (Big-DAMA'18)},
month = aug,
year = {2018},
location = {Budapest, Hungary},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18bigdama.pdf}
}
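To convey the stream-learning flavor, here is a heavily simplified detector in the spirit of DenStream: decaying micro-clusters absorb nearby samples, and a sample that no dense cluster absorbs is flagged as anomalous. The decay rule, parameter names and `update` interface are assumptions of this sketch, not the paper's code:

```python
import math

class StreamingDetector:
    """Simplified DenStream-like detector: maintain micro-clusters with
    exponentially decaying weights; a sample absorbed only by a sparse
    cluster (or by none) is anomalous. Illustrative parameters."""

    def __init__(self, eps=1.0, lam=0.01, dense=3.0):
        self.eps, self.lam, self.dense = eps, lam, dense
        self.clusters = []  # each: [center, weight, last_update_time]

    def _decay(self, c, t):
        c[1] *= 2 ** (-self.lam * (t - c[2]))  # exponential forgetting
        c[2] = t

    def update(self, x, t):
        """Return True if sample x at time t is anomalous."""
        best, bestd = None, float("inf")
        for c in self.clusters:
            self._decay(c, t)
            d = math.dist(c[0], x)
            if d < bestd:
                best, bestd = c, d
        if best is not None and bestd <= self.eps:
            n = best[1] + 1.0  # absorb: move center toward x
            best[0] = [(ci * best[1] + xi) / n for ci, xi in zip(best[0], x)]
            best[1] = n
            return best[1] < self.dense
        self.clusters.append([list(x), 1.0, t])
        return True
```

Repeated samples in the same region make their cluster dense, so the steady traffic pattern stops raising alarms while outliers still do.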
Page load time (PLT) is still the most common application Quality of Service (QoS) metric to estimate the Quality of Experience (QoE) of Web users. Yet, recent literature abounds with interesting proposals for alternative metrics (e.g., Above The Fold, SpeedIndex and variants) that aim at closely capturing how users perceive the Webpage rendering process. However, these novel metrics are typically computationally expensive, as they require monitoring and post-processing videos of the rendering process, and have failed to be widely deployed. In this demo, we show our implementation of an open-source Chrome extension that implements a practical and lightweight method to measure the approximated Above-the-Fold (AATF) time, as well as other Web performance metrics. The idea is, instead of accurately monitoring the rendering output, to track the download time of the last visible object on screen (i.e., "above the fold"). Our plugin also has options to save detailed reports for later analysis, a functionality ideally suited for researchers wanting to gather data from Web experiments.
@inproceedings{DR:SIGCOMM-18a,
title = {A practical method for measuring Web above-the-fold time},
author = {da Hora, Diego Neves and Rossi, Dario and Christophides, Vassilis and Teixeira, Renata},
booktitle = {ACM SIGCOMM, Demo Session},
address = {Budapest, Hungary},
month = aug,
year = {2018},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18sigcomm-a.pdf}
}
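The measurement idea (track the download finish time of the last object inside the initial viewport) is simple enough to sketch directly; representing page objects as `(finish_time, y_top)` pairs is an assumption of this illustration:

```python
def approx_atf_time(objects, viewport_height):
    """Approximated Above-the-Fold (AATF) time: the download finish
    time of the last object whose top edge falls inside the initial
    viewport. `objects` is a list of (finish_time, y_top) pairs."""
    visible = [t for t, y in objects if y < viewport_height]
    return max(visible) if visible else 0.0
```

Objects below the fold (here, the one at y=2000) are ignored, which is exactly why AATF can be much smaller than the full page load time.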
Software packet processing is an intriguing approach due to its tremendous flexibility and cost reduction compared with hardware solutions, which have long dominated software in terms of performance. However, the emergence of fast packet I/O frameworks challenges hardware supremacy, as software solutions based on commodity hardware manage to process packets at high speed (10-40 Gbps). Whereas novel packet processing applications based on these new frameworks proliferate, fine-grained traffic monitoring at high speed has received comparatively less attention. In this demonstration, we showcase FlowMon-DPDK, a novel software traffic monitor recently published at TMA 2018, based on the Intel DPDK I/O framework. Our monitor is capable of providing runtime statistics at both packet and flow levels at 10 Gbit/s using a minimal amount of CPU resources, with packet losses that are orders of magnitude smaller than in state-of-the-art software. A video showing the demonstration is available at https://youtu.be/B8uaw9UgMm0. For further details, please refer to our TMA 2018 paper.
@inproceedings{DR:SIGCOMM-18b,
author = {Zhang, Tianzhu and Linguaglossa, Leonardo and Gallo, Massimo and Giaccone, Paolo and Rossi, Dario},
title = {FlowMon-DPDK: Parsimonious per-flow software monitoring at line rate},
booktitle = {ACM SIGCOMM, Demo Session},
address = {Budapest, Hungary},
month = aug,
year = {2018},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18sigcomm-b.pdf}
}
We demonstrate that fair dropping is an effective means to realize fair sharing of bandwidth and CPU in a software router. The analysis underpinning the effectiveness of the proposed approach is presented in an IFIP Networking 2018 paper [1].
@inproceedings{DR:SIGCOMM-18c,
author = {Addanki, Vamsi and Linguaglossa, Leonardo and Roberts, James and Rossi, Dario},
title = {Fair dropping for multi-resource fairness in software routers},
booktitle = {ACM SIGCOMM, Demo Session},
address = {Budapest, Hungary},
month = aug,
year = {2018},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18sigcomm-c.pdf}
}
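A classical way to approximate the per-flow max-min fairness this demo targets is longest-queue drop: when the shared buffer fills, drop from the flow with the largest backlog. The sketch below illustrates that principle only; the paper's algorithm also governs CPU sharing and batch I/O, and differs in its details:

```python
from collections import defaultdict, deque

class FairDropBuffer:
    """Toy longest-queue-drop buffer: on overflow, the packet dropped
    comes from the flow with the largest backlog, which pushes per-flow
    occupancies toward a max-min fair share of the buffer."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queues = defaultdict(deque)
        self.count = 0

    def enqueue(self, flow, pkt):
        if self.count >= self.capacity:
            victim = max(self.queues, key=lambda f: len(self.queues[f]))
            self.queues[victim].popleft()   # drop from the fattest flow
            self.count -= 1
        self.queues[flow].append(pkt)
        self.count += 1

    def backlog(self, flow):
        return len(self.queues[flow])
```

Note how an aggressive flow only ever hurts itself: its own queue is the one trimmed when the buffer is full.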
Datacenter network (DCN) design has been actively researched for over a decade. Solutions proposed range from end-to-end transport protocol redesign to more intricate, monolithic and cross-layer architectures. Despite this intense activity, to date we note the absence of DCN proposals based on simple fair scheduling strategies. In this paper, we evaluate the effectiveness of FQ-CoDel in the DCN environment. Our results show (i) that average throughput is greater than that attained with DCN-tailored protocols like DCTCP, and (ii) that the completion time of short flows is close to that of state-of-the-art DCN proposals like pFabric. Good-enough performance and striking simplicity make FQ-CoDel a serious contender in the DCN arena.
@inproceedings{DR:HPSR-18,
author = {Gong, YiXi and Roberts, James W. and Rossi, Dario},
title = {Per-Flow Fairness in the Datacenter Network},
booktitle = {IEEE International Conference on High Performance Switching and Routing (HPSR'18)},
month = jun,
year = {2018},
location = {Bucharest, Romania},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18hpsr.pdf}
}
We propose and evaluate simple signals coming from in-network telemetry that are effective in enhancing the quality of DASH streaming. Specifically, in-network caching is known to positively affect DASH streaming quality, but at the same time to negatively affect controller stability, increasing the quality switch ratio. Our contributions are to first (i) consider the broad spectrum of interactions between the network and the application, and then (ii) devise how to effectively exploit in a DASH controller a very simple signal (i.e., the per-quality hit ratio) that can be exported by frameworks such as Server and Network Assisted DASH (SAND) at a fairly low rate (i.e., on a timescale of tens of seconds). Our thorough experimental campaign confirms the soundness of the approach (which significantly ameliorates performance with respect to network-blind DASH), as well as its robustness (i.e., tuning is not critical) and practical appeal (i.e., due to its simplicity and compatibility with SAND).
@inproceedings{DR:NOSSDAV-18,
author = {Samain, Jacques and Carofiglio, Giovanna and Tortelli, Michele and Rossi, Dario},
title = {A simple yet effective network-assisted signal for enhanced DASH quality of experience},
booktitle = {28th ACM SIGMM Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV'18)},
month = jun,
year = {2018},
note = {bestpaperaward},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18nossdav.pdf}
}
In 2012, Google introduced the Speed Index (SI) metric to quantify the visual completeness speed of the above-the-fold (ATF) portion of a Web page actually displayed to the user. In Web browsing, a page might appear to the user to be already fully rendered even though further content is still being retrieved, up to the Page Load Time (PLT). This happens because the browser progressively renders all objects, part of which can also be located below the browser window's current viewport. The SI metric (and variants thereof) has since established itself as a de facto standard in Web page and browser testing. While SI is a step in the direction of including the user experience into Web metrics, the actual meaning of the metric, and especially the relationship between the Speed Index and Web QoE, is however far from clear. The contributions of this paper are thus to first develop an understanding of the SI based on a theoretical analysis and, second, to analyze the interdependency between SI and MOS values from an existing public dataset. Specifically, our analysis is based on two well-established models that map the user waiting time to a user ACR rating of the QoE. The analysis shows that ATF-based metrics are more appropriate than the pure PLT as input to Web QoE models.
@inproceedings{DR:QOMEX-18,
author = {Hossfeld, Tobias and Metzger, Florian and Rossi, Dario},
title = {Speed Index: Relating the Industrial Standard for User Perceived Web Performance to Web QoE},
booktitle = {10th International Conference on Quality of Multimedia Experience (QoMEX 2018)},
month = jun,
year = {2018},
location = {Sardinia, Italy},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18qomex.pdf}
}
Testing experimental network devices requires deep performance analysis, which is usually performed with expensive, inflexible hardware equipment. With the advent of high-speed packet I/O frameworks, general-purpose equipment has narrowed the performance gap with respect to dedicated hardware, and a variety of software-based solutions have emerged for handling traffic at very high speed. While the literature abounds with software traffic generators, existing monitoring solutions either do not target worst-case scenarios (i.e., 64B packets at line rate) that are particularly relevant for stress-testing high-speed network functions, or occupy too many resources. In this paper we first analyze the design space for high-speed traffic monitoring, which leads us to the specific choices characterizing FlowMon-DPDK, a DPDK-based software traffic monitor that we make available as open source software. In a nutshell, FlowMon-DPDK provides tunable fine-grained statistics at both packet and flow levels. Experimental results demonstrate that our traffic monitor is able to provide per-flow statistics with 5-nines precision at high speed (14.88 Mpps) using an exiguous amount of resources. Finally, we showcase FlowMon-DPDK usage by testing two open source prototypes for stateful flow-level end-host and in-network packet processing.
@inproceedings{DR:TMA-18,
author = {Zhang, Tianzhu and Linguaglossa, Leonardo and Gallo, Massimo and Giaccone, Paolo and Rossi, Dario},
title = {FlowMon-DPDK: Parsimonious per-flow software monitoring at line rate},
booktitle = {Network Traffic Measurement and Analysis Conference (TMA'18)},
month = jun,
year = {2018},
location = {Wien, Austria},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18tma.pdf}
}
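The per-flow bookkeeping at the heart of such a monitor reduces to a hash table keyed by the 5-tuple. FlowMon-DPDK itself is an optimized DPDK/C program, so the following Python sketch conveys only the accounting logic, with an illustrative interface:

```python
from collections import defaultdict

class FlowTable:
    """Minimal per-flow accounting: map each 5-tuple to a running
    [packet, byte] count, updated once per observed packet."""

    def __init__(self):
        self.flows = defaultdict(lambda: [0, 0])  # 5-tuple -> [pkts, bytes]

    def update(self, five_tuple, length):
        entry = self.flows[five_tuple]
        entry[0] += 1        # one more packet for this flow
        entry[1] += length   # account its size

    def stats(self, five_tuple):
        pkts, nbytes = self.flows[five_tuple]
        return pkts, nbytes
```

At 14.88 Mpps the real implementation must make this lookup-and-increment essentially free, which is where the paper's hash-table and batching design choices come in.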
@inproceedings{DR:RIPE-76,
title = {Perceptual evaluation of web-browsing},
author = {da Hora, Diego Neves and Christophides, Vassilis and Teixeira, Renata and Rossi, Dario},
booktitle = {Talk at the RIPE76, Measurement and Tools (MAT) Working Group},
address = {Marseille, France},
month = may,
year = {2018}
}
The paper discusses resource sharing in a software router where both bandwidth and CPU may be bottlenecks. We propose a novel fair dropping algorithm to realize per-flow max-min fair sharing of these resources. The algorithm is compatible with features like batch I/O and batch processing that tend to make classical scheduling impractical. We describe an implementation using Vector Packet Processing, part of the Linux Foundation FD.io project. Preliminary experimental results prove the efficiency of the algorithm in controlling bandwidth and CPU sharing at high speed. Performance under dynamic traffic is evaluated using analysis and simulation, demonstrating that the proposed approach is both effective and scalable.
@inproceedings{DR:NETWORKING-18,
author = {Addanki, Vamsi and Linguaglossa, Leonardo and Roberts, James and Rossi, Dario},
title = {Controlling software router resource sharing by fair packet dropping},
booktitle = {IFIP Networking 2018},
month = may,
location = {Zurich, Switzerland},
year = {2018},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18networking.pdf}
}
Recent technology evolution of network equipment allows a wealth of information to be continuously streamed, pertaining to multiple protocols and layers of the stack, at a very fine spatial grain and high frequency. Processing this deluge of telemetry data in real time clearly offers new opportunities for network control and troubleshooting, but also poses serious challenges. In this demonstration, we tackle this challenge by applying streaming machine-learning techniques to the continuous flow of control- and data-plane telemetry data, with the purpose of real-time detection of BGP anomalies. In particular, we implement an anomaly detection engine that leverages DenStream, an unsupervised clustering technique, and apply it to telemetry features collected from a large-scale testbed comprising tens of routers traversed by 1 Terabit/s worth of real application traffic.
@inproceedings{DR:INFOCOM-18a,
author = {Putina, Andrian and Rossi, Dario and Bifet, Albert and Barth, Steven and Pletcher, Drew and Precup, Cristina and Nivaggioli, Patrice},
title = {Unsupervised real-time detection of BGP anomalies leveraging high-rate and fine-grained telemetry data},
booktitle = {IEEE INFOCOM, Demo Session},
month = apr,
year = {2018},
location = {Honolulu, Hawaii},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18infocom-a.pdf}
}
In the last decade, a number of frameworks started to appear that implement, directly in user-space with kernel-bypass mode, high-speed software data plane functionalities on commodity hardware. Vector Packet Processor (VPP) is one such framework, representing an interesting point in the design space in that it offers: (i) user-space networking, (ii) the flexibility of a modular router (Click and variants) with (iii) the benefits brought by techniques such as batch processing that have become commonplace in lower-level building blocks of high-speed networking stacks (such as netmap or DPDK). Similarly to Click, VPP lets users arrange functions as a processing graph, providing a full-blown stack of network functions. However, unlike Click, where the whole tree is traversed for each packet, in VPP each traversed node processes all packets in the batch before moving to the next node. This design choice enables several code optimizations that greatly improve the achievable processing throughput: the purpose of this demonstration is to introduce the main VPP concepts and architecture, as well as to experimentally show the impact of design choices (and especially of batch packet processing) on the achievable packet forwarding performance.
@inproceedings{DR:INFOCOM-18b,
author = {Barach, David and Linguaglossa, Leonardo and Marion, Damjan and Pfister, Pierre and Pontarelli, Salvatore and Rossi, Dario and Tollet, Jerome},
title = {Batched packet processing for high-speed software data plane functions},
booktitle = {IEEE INFOCOM, Demo Session},
month = apr,
year = {2018},
location = {Honolulu, Hawaii},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18infocom-b.pdf}
}
Originally used to assist network-layer fragmentation and reassembly, the IP identification field (IP-ID) has been used and abused for a range of tasks, from counting hosts behind NAT, to detecting router aliases and, lately, to assisting the detection of censorship in the Internet at large. These inferences have been possible since, in the past, the IP-ID was mostly implemented as a simple packet counter: however, this behavior has been discouraged for security reasons, and other policies, such as random values, have been suggested. In this study, we propose a framework to classify the different IP-ID behaviors using active probing from a single host. Despite being only minimally intrusive, our technique is significantly accurate (99% true positive classification), robust against packet losses (up to 20%) and lightweight (a few packets suffice to discriminate all IP-ID behaviors). We then apply our technique to an Internet-wide census, where we actively probe one alive target per routable /24 subnet: we find that the majority of hosts adopt a constant IP-ID (39%) or a local counter (34%), that the fraction of global counters (18%) has significantly diminished, that a non-marginal number of hosts exhibit an odd behavior (7%) and that random IP-IDs are still an exception (2%).
@inproceedings{DR:PAM-18a,
title = {A closer look at IP-ID behavior in the Wild},
author = {Salutari, Flavia and Cicalese, Danilo and Rossi, Dario},
booktitle = {International Conference on Passive and Active Network Measurement (PAM)},
address = {Berlin, Germany},
year = {2018},
month = mar,
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18pam-a.pdf}
}
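A toy version of the classification task can illustrate the behavior classes; this is not the paper's statistical classifier — the thresholds below are arbitrary, and the local-counter and odd classes are omitted for brevity:

```python
def classify_ipid(ids, mod=2 ** 16):
    """Rough heuristic labeling of a probed IP-ID sequence as
    'constant', 'global' (a sequential counter, allowing for 16-bit
    wrap-around), or 'random'. Illustrative thresholds only."""
    if len(set(ids)) == 1:
        return "constant"
    # successive increments, modulo 2^16 to absorb counter wrap-around
    deltas = [(b - a) % mod for a, b in zip(ids, ids[1:])]
    if all(0 < d <= 10 for d in deltas):
        return "global"
    return "random"
```

The modular delta is the key trick: a wrapping counter like 65535, 1, 2 still yields small positive increments, while random values produce large, erratic ones.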
Page load time (PLT) is still the most common application Quality of Service (QoS) metric to estimate the Quality of Experience (QoE) of Web users. Yet, recent literature abounds with proposals for alternative metrics (e.g., Above The Fold, SpeedIndex and their variants) that aim at better estimating user QoE. The main purpose of this work is thus to thoroughly investigate a mapping between established and recently proposed objective metrics and user QoE. We obtain ground-truth QoE via user experiments, where we collect and analyze 3,400 Web accesses annotated with QoS metrics and explicit user ratings on a scale of 1 to 5, which we make available to the community. In particular, we contrast domain-expert models (such as ITU-T and IQX) fed with a single QoS metric, to models trained using our ground-truth dataset over multiple QoS metrics as features. Results of our experiments show that, albeit very simple, expert models have an accuracy comparable to machine learning approaches. Furthermore, the model accuracy improves considerably when building per-page QoE models, which may raise scalability concerns, as we discuss.
@inproceedings{DR:PAM-18b,
title = {Narrowing the gap between QoS metrics and Web QoE using Above-the-fold metrics},
author = {da Hora, Diego Neves and Asrese, Alemnew Sheferaw and Christophides, Vassilis and Teixeira, Renata and Rossi, Dario},
booktitle = {International Conference on Passive and Active Network Measurement (PAM), Recipient of the Best dataset award},
address = {Berlin, Germany},
month = mar,
year = {2018},
note = {bestpaperaward},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18pam-b.pdf}
}
In the Internet, Autonomous Systems continuously exchange routing information via the BGP protocol: the large number of networks involved and the verbosity of BGP result in a huge stream of updates. Making sense of all those messages remains a challenge today. In this paper, we leverage the notion of "primary path" (i.e., the most used inter-domain path of a BGP router toward a destination prefix in a given time period), reinterpreting updates by grouping them in terms of primary-path unavailability periods, and illustrate how BGP dynamics analysis would benefit from working with primary paths. Our contributions are as follows. First, through measurements, we validate the existence of primary paths: by analyzing BGP updates announced at the LINX RIS route collector over a three-month period, we show that primary paths are consistently in use during the observation period. Second, we quantify the benefits of primary paths for BGP dynamics analysis on two use cases: Internet tomography and anomaly detection. For the latter, using three months of anomalous BGP events documented by BGPmon as reference, we show that primary paths could be used to detect such events (hijacks and outages), testifying to the increased semantics they provide.
@inproceedings{DR:PAM-18c,
title = {{Leveraging Inter-domain Stability for BGP Dynamics Analysis}},
author = {Green, Thomas and Lambert, Anthony and Pelsser, Cristel and Rossi, Dario},
booktitle = {International Conference on Passive and Active Network Measurement (PAM)},
address = {Berlin, Germany},
year = {2018},
month = mar,
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18pam-c.pdf}
}
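The primary-path notion translates directly into code: over an observation window, it is the path in use for the largest total time. A minimal sketch, assuming updates arrive as time-sorted `(timestamp, path)` pairs with each path staying in use until the next update:

```python
def primary_path(updates, t_end):
    """Primary path of a (router, prefix) pair over [first update,
    t_end]: the path accumulating the largest total usage time.
    `updates` is a time-sorted list of (t, path) announcements."""
    usage = {}
    for (t, path), nxt in zip(updates, updates[1:] + [(t_end, None)]):
        usage[path] = usage.get(path, 0.0) + (nxt[0] - t)
    return max(usage, key=usage.get) if usage else None
```

A brief detour (here, two time units on "A C") does not displace the primary path, which is what lets the analysis compress bursts of updates into short unavailability periods.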
IP anycast is a commonly used technique to share the load of a variety of global services. For more than one year, leveraging a lightweight technique for IP anycast detection, enumeration and geolocation, we have performed regular monthly IP censuses. This paper provides a brief longitudinal study of the anycast ecosystem, and we additionally make all our datasets (raw measurements from PlanetLab and RIPE Atlas), results (monthly geolocated anycast replicas for all IP/24) and code available to the community.
@article{DR:CCR-18,
title = {A longitudinal study of IP Anycast},
author = {Cicalese, Danilo and Rossi, Dario},
journal = {ACM Computer Communication Review},
volume = {1},
year = {2018},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18ccr.pdf}
}
In-network caching is an appealing solution to cope with the increasing bandwidth demand of video, audio and data transfer over the Internet. Nonetheless, in order to protect consumer privacy and their own business, Content Providers (CPs) increasingly deliver encrypted content, thereby preventing Internet Service Providers (ISPs) from employing traditional caching strategies, which require knowledge of the objects being transmitted. To overcome this emerging tussle between security and efficiency, in this paper we propose an architecture in which the ISP partitions the cache space into slices, assigns each slice to a different CP, and lets the CPs remotely manage their slices. This architecture enables transparent caching of encrypted content, and can be deployed in the very edge of the ISP's network (i.e., base stations, femtocells), while allowing CPs to maintain exclusive control over their content. We propose an algorithm, called SDCP, for partitioning the cache storage into slices so as to maximize the bandwidth savings provided by the cache. A distinctive feature of our algorithm is that ISPs only need to measure the aggregated miss rates of each CP, but they need not know of the individual objects that are requested. We prove that the SDCP algorithm converges to a partitioning that is close to the optimal, and we bound its optimality gap. We use simulations to evaluate SDCP's convergence rate under stationary and non-stationary content popularity. Finally, we show that SDCP significantly outperforms traditional reactive caching techniques, considering both CPs with perfect and with imperfect knowledge of their content popularity.
@article{DR:TON-18,
title = {Caching Encrypted Content via Stochastic Cache Partitioning},
author = {Araldo, Andrea and Dan, Gyorgy and Rossi, Dario},
year = {2018},
volume = {26},
issue = {1},
doi = {10.1109/TNET.2018.2793892},
journal = {IEEE/ACM Transactions on Networking},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18ton.pdf}
}
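The miss-rate-driven reallocation at the core of the architecture can be illustrated with a deliberately naive step rule. The real SDCP algorithm is a stochastic optimization with proven convergence and a bounded optimality gap; treat this sketch purely as intuition for how aggregated miss rates alone can steer the partitioning:

```python
def rebalance_slices(sizes, miss_rates, step=1):
    """Illustrative (non-SDCP) reallocation step: shift `step` units of
    cache space from the CP with the lowest aggregated miss rate to the
    one with the highest. Only per-CP miss rates are consulted; no
    per-object knowledge is needed, mirroring SDCP's key property."""
    cps = list(sizes)
    donor = min(cps, key=lambda c: miss_rates[c])
    taker = max(cps, key=lambda c: miss_rates[c])
    if donor != taker and sizes[donor] >= step:
        sizes[donor] -= step
        sizes[taker] += step
    return sizes
```

Repeating such steps as miss rates are re-measured moves storage toward the CPs that would benefit most from it, which is the qualitative behavior SDCP formalizes.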
In this paper we propose a methodology for the study of general cache networks, which is intrinsically scalable and amenable to parallel execution. We contrast two techniques: one that slices the network, and another that slices the content catalog. In the former, each core simulates requests for the whole catalog on a subgraph of the original topology, whereas in the latter each core simulates requests for a portion of the original catalog on a replica of the whole network. Interestingly, we find out that when the number of cores increases (and with it the split ratio of the network topology), the overhead of message passing required to keep consistency among nodes actually offsets any benefit from the parallelization: this is strictly due to the correlation among neighboring caches, meaning that requests arriving at one cache allocated on one core may depend on the status of one or more caches allocated on different cores. Even more interestingly, we find out that the newly proposed catalog slicing, on the contrary, achieves an ideal speedup in the number of cores. Overall, our system, which we make available as open source software, enables performance assessment of large-scale general cache networks, i.e., comprising hundreds of nodes, trillions of contents, and complex routing and caching algorithms, in minutes of CPU time and with exiguous amounts of memory.
@article{DR:JSAC-18,
title = {Parallel Simulation of Very Large-Scale General Cache Networks},
author = {Tortelli, Michele and Rossi, Dario and Leonardi, Emilio},
year = {2018},
journal = {IEEE Journal on Selected Areas in Communication (JSAC)},
volume = {to appear},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18jsac.pdf}
}
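The catalog-slicing idea can be sketched as hash-partitioning the request stream by object and simulating each slice independently. The sketch runs the slices sequentially for clarity (the paper runs them on separate cores), and splitting a single LRU budget evenly across slices is a simplifying assumption, not the paper's exact method:

```python
from collections import OrderedDict

def simulate_slice(requests, cache_size):
    """LRU hit count for one catalog slice, simulated in isolation."""
    cache, hits = OrderedDict(), 0
    for obj in requests:
        if obj in cache:
            cache.move_to_end(obj)
            hits += 1
        else:
            cache[obj] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)
    return hits

def catalog_sliced_hits(requests, cache_size, n_slices):
    """Catalog slicing sketch: partition requests by a hash of the
    object, simulate each slice independently, and sum the results.
    Slices never interact, which is why this parallelizes ideally."""
    slices = [[] for _ in range(n_slices)]
    for obj in requests:
        slices[hash(obj) % n_slices].append(obj)
    return sum(simulate_slice(s, cache_size // n_slices) for s in slices)
```

Because no request in one slice ever touches another slice's cache state, there is no cross-core message passing — the property the network-slicing approach lacks.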
Recent research has identified Information-Centric Networking (ICN) as a good fit for Internet of Things (IoT) deployments. However, most studies have focused on ICN as an application enabler, disregarding the behaviour from a network viewpoint. In this paper, we address this by studying the most important properties of an ICN-IoT deployment and contrast the operational costs between geographic-based forwarding and name-based forwarding schemes. We aim to understand if, and under which IoT deployment characteristics, geographic forwarding constitutes an advantage over name-based schemes, in terms of feasibility (i.e., memory footprint and computational capability of the devices) and performance (which we analyze as the overall energy cost of operating an ICN-IoT network under either forwarding paradigm). To achieve this goal, we employ a mixture of (i) modelling, (ii) simulative and (iii) experimental methodologies, which are useful to respectively (i) state the problem in a principled way, (ii) gather information about topological properties that are instrumental to the model and (iii) gather physical properties of the devices to feed the model with realistic data. In a nutshell, our results show that geographic forwarding (i) halves the memory footprint on our reference deployments and (ii) yields significant energy savings, especially for dynamic topologies.
@article{DR:TGCN-18,
author = {Enguehard, Marcel and Droms, Ralph and Rossi, Dario},
journal = {IEEE Transactions on Green Communications and Networking},
title = {On the cost of geographic forwarding for information-centric things},
year = {2018},
volume = {to appear},
pages = {1-1},
keywords = {Cryptography;Data models;Architecture;Protocols;Internet of Things;Topology},
doi = {10.1109/TGCN.2018.2867267},
howpublished = {https://perso.telecom-paristech.fr/drossi/paper/rossi18tgcn.pdf},
issn = {2473-2400},
}