
 


Secure AIoT

AI-native, holistic, end-to-end framework for future 6G networks


Summary

AI-native, holistic, end-to-end concept

Global traffic and network infrastructure are growing dramatically: monthly traffic was expected to reach 333 exabytes by the end of 2022 and could reach 682 exabytes per month in 2025, a compound annual growth rate of 27% [1]. At the same time, the IoT is experiencing explosive growth, with over 16 billion IoT devices expected by 2025 [2]. This rapid evolution will make network management extremely complex, expensive, and time-consuming. This is why interest in AI for telecommunications is at an all-time high: it promises to optimize and automate networks, keep them healthy and secure, and reduce operational costs.

Moreover, we envision a world where virtually all devices, networks, and systems become intelligent, from simple wearables that monitor our health up to systems that enable autonomous driving. Smart networks and services will intelligently interact with our environment, simplifying and enriching our daily lives. By the end of 2027, the global market for AI in telecommunications is expected to reach nearly $15 billion. AI not only gives devices and systems the ability to perceive, deduce, and act intuitively and intelligently, but also changes how we approach and solve technical challenges.

Future 6G networks, with their massive connectivity, will elevate object capabilities to new levels and expand intelligence into new devices, deployments, and industries. Our goal is to make AI advancements inherently synergistic with future 6G networks in order to improve system performance and efficiency. With the proliferation of connected devices and the ever more important role of on-device intelligence, the transformation of AI into fully distributed intelligence will be one of the keys to realizing the full potential of future 6G networks.

The centralized architecture of today's networks will suffer from major bottlenecks as the number of connected objects increases exponentially: connectivity (interference), latency (distant servers), energy consumption (cost of communicating over long distances), centralized data processing (data flooding) and centralized network management (sheer number of objects). Fully distributing AI from the cloud down to end-user devices can deliver better system efficiency, enhanced privacy and security, improved performance, reduced latency, and new levels of personalization. End-user devices must be designed to sense, learn, reason, interpret and act intelligently, interacting optimally with the cloud by sharing insights, but not raw data.

With an AI-native, holistic, end-to-end approach, the system will further support continuous improvement through self-learning, where both sides of the AI-native air interface, the network and the device, dynamically adapt to their surroundings and optimize operations based on what they experience. This is a fundamental paradigm shift in the way wireless systems can be improved, and we envision this AI-native design methodology becoming part of the future 6G system. Distributed AI will drive the core and the RAN (radio access network) with intelligent network operation to provide enhanced QoS and QoE, better efficiency, simplified deployment, and improved security, while reducing energy consumption, CO2 emissions and operational costs.

The underlying enabling capability of on-device AI is radio awareness: environmental and contextual sensing that can reduce overheads and latency. Through radio awareness, the 6G system can support enhanced device experiences (e.g., more intelligent beamforming and power management), improve system performance (e.g., reduced interference and better spectrum utilization) and improve security (e.g., better detection of and protection against malicious attacks).

Pushing the technology boundaries of smart networks into objects faces severe challenges. The first challenge is the deployment of fully distributed AI algorithms in the networks: all devices and nodes with computation capabilities will not only need to learn from local data, they will also need to communicate their learnings with each other and work together, quickly and efficiently, to make network management decisions in a collaborative and autonomous way. The second challenge is the limited computing resources of end-user nodes: incorporating appropriate intelligence into end-user devices will require advanced tiny machine learning algorithms, adequate frugal AI models, and a computing paradigm that exceeds the current capabilities of smartphones and portables. The third challenge is to ensure privacy and security, which is of critical importance since end-user devices, especially IoT devices, are susceptible to external attacks that can leak private information or endanger users. The fourth challenge is the scarcity of training data needed to implement machine learning models in a reliable and trustworthy manner.

Further, the massive adoption of AI tools may exacerbate the energy consumption of the ICT infrastructure. It will therefore be crucial to devise energy-efficient architectures and computation algorithms, so that future mobile networks can exploit artificial intelligence technologies through energetically sustainable communication and computing paradigms. Such a unified and open communication and computing architecture, with massive adoption of AI across all layers of the network, should enable seamless operations and service execution across a multiplicity of heterogeneous infrastructures, services and businesses, while providing secure and reliable scalability towards an unlimited number of application requirements. It offers a consistent, reliable, programmable environment enabling “tailor-made” implementation of various tenants’ requirements, a promising solution with many potential breakthroughs. Hence, such network-cloud-sensing-computing convergence can provide a robust foundation for a massively digitised economy and society that is both sustainable and secure.


Project Breakdown

 


 


Objectives

Global objective: This project aims at designing fully distributed intelligence for smart, secure and green future 6G networks. Thanks to distributed AI-native integration within a holistic end-to-end framework, we will realize better system efficiency, improved performance, reduced latency, and enhanced privacy and security. The system will further support continuous improvement through self-learning, where both the network and the device can dynamically adapt to their surroundings and optimize operations based on what they experience.

Objective 1: Designing an intelligent hybrid mesh 6G network with great flexibility, high agility, self-adaptability, improved performance and increased efficiency

One of the most important advances in 5G networks is the softwarization of the network infrastructure, whose main benefits are flexibility, agility and scalability. However, central softwarization may not be able to cope with the increasing complexity and heterogeneity of the network. Many functions, including communications, computing, content caching, and storage, must rely on distributed decision-making to avoid the overhead of a centralised solution. The first specific objective of the project is to propose a distributed AI-Defined Network concept, combined with distributed intelligence, to design a future 6G network that is smarter, more flexible, more agile, and capable of learning and self-adapting as network demand evolves. The distributed intelligence will be designed to provide better QoS and QoE, improved performance, increased spectrum efficiency and enhanced security, and to support dynamic spectrum sharing across multiple frequency bands and zero-touch end-to-end resource management with drastic OPEX reduction, while reducing overall energy consumption.

Objective 2: Developing distributed edge intelligence with bandwidth, energy and memory efficiency and a low-power flexible HW accelerator

The second specific objective of this project is the development of distributed edge intelligence, which includes end-user devices as part of the computational intelligent platform for “small data” processing, analysis, and delivery, where efficient edge support will be needed for advanced analytics. End-user devices will be designed to act intelligently, interacting optimally with the edge/cloud by sharing insights, but not raw data. We will propose fully decentralized large-scale distributed training across many interconnected devices (decentralized federated learning), together with tiny AI models built through effective model-compression strategies at the end-user device level, to reduce network bandwidth and increase energy and memory efficiency. An ultra-low-power flexible hardware accelerator combining memory-based computing, processing-in-memory and near-memory computing will be designed to enable on-device intelligence.
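To make the idea of decentralized federated learning concrete, the following is a minimal sketch (not the project's actual algorithm) of gossip-style decentralized averaging: each simulated device takes a gradient step on its own data shard, then averages model weights only with its ring neighbours, so raw data never leaves the device. All function and variable names here are illustrative.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient step of linear least squares on a device's local data."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def gossip_round(weights, X_parts, y_parts, lr=0.1):
    """Each device trains locally, then averages weights with its ring
    neighbours. Only model weights (insights) are exchanged, never raw data."""
    n = len(weights)
    trained = [local_step(weights[i], X_parts[i], y_parts[i], lr)
               for i in range(n)]
    # Neighbour averaging on a ring: device i mixes with i-1 and i+1 (mod n).
    return [(trained[(i - 1) % n] + trained[i] + trained[(i + 1) % n]) / 3.0
            for i in range(n)]

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
# Four devices, each holding its own local data shard.
X_parts = [rng.normal(size=(50, 2)) for _ in range(4)]
y_parts = [X @ w_true for X in X_parts]
weights = [np.zeros(2) for _ in range(4)]
for _ in range(200):
    weights = gossip_round(weights, X_parts, y_parts)
```

After enough rounds, all devices converge to (nearly) the same model despite never sharing their data, which is the essential property the project relies on.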

Objective 3: Enhanced dynamic end-to-end distributed security and privacy

The third objective of this project is the development of detection and prevention mechanisms against various threats. First, we will use various machine learning techniques to address threats in which malicious actors aim to undermine the security of the developed networks. Next, since the developed networks, as well as our proposed defences, heavily depend on AI, it is necessary to ensure that the AI itself is protected against attacks. As the envisioned system uses data to build models, it is often necessary to protect the privacy of that data; for this we leverage federated learning, as discussed above, as well as differential privacy. Finally, we will leverage advances in AI explainability to provide directions for designing even better defences and for improving the robustness of our AI-powered defences.
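One standard way differential privacy is applied to model training, in the spirit of DP-SGD, is to clip each per-example gradient to a norm bound and add calibrated Gaussian noise before aggregation. The sketch below illustrates only that mechanism; the parameter names (clip_bound, noise_multiplier) are illustrative assumptions, not part of the project.

```python
import numpy as np

def dp_average_gradients(per_example_grads, clip_bound, noise_multiplier, rng):
    """Clip each per-example gradient to norm <= clip_bound, sum them,
    add Gaussian noise scaled to clip_bound, and average (Gaussian mechanism)."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_bound / max(norm, 1e-12)))
    noise = rng.normal(0.0, noise_multiplier * clip_bound,
                       size=clipped[0].shape)
    return (np.sum(clipped, axis=0) + noise) / len(per_example_grads)

rng = np.random.default_rng(42)
# Three per-example gradients; the last one is a large outlier that
# clipping bounds, limiting any single example's influence.
grads = [rng.normal(size=4) * s for s in (0.5, 1.0, 10.0)]
dp_grad = dp_average_gradients(grads, clip_bound=1.0,
                               noise_multiplier=1.1, rng=rng)
```

Clipping caps the sensitivity of the aggregate to any single data point, which is what lets the added noise translate into a formal privacy guarantee.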

Objective 4: Augmentation, frugality and openness of training data

As a matter of fact, data originally collected for AI training, even after routine pre-processing, is not optimally ready for application. It exhibits an entire spectrum of problems that impede its efficient use.

First, these include problems with the quality of the data, such as the presence of missing values or contamination with abnormal observations, both inevitable and ubiquitous in real conditions. A further problem in this context is the heavy-tailedness of the data caused by the intrinsic data-generating process, i.e., a situation where there is a non-negligible probability of large values; this has already been observed when studying the behaviour of Internet traffic, which can exhibit spontaneous patterns. In the case of data-quality deterioration, data frugality means making the most efficient use of the data. The problem is (at least) three-fold.

(a) The presence of anomalies in the data disturbs the employed (and trained) models and can heavily reduce their performance.

(b) The presence of missing values makes it impossible to run existing software, which is usually unable to handle absent entries in the database. Observation-wise deletion is not an option, especially when a small portion of missing values is present in almost every (high-dimensional, e.g., time-series) observation: even though most measurements are available, deletion would then eliminate the majority of observations entirely.

(c) Heavy tails in the measured data cause estimation problems that are normally not treatable with neural-network-based models.
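The three problems above can be illustrated with a toy sketch (not the project's actual pipeline): missing entries are imputed column-wise with the median of the observed values, an estimator that, unlike the mean, stays robust to anomalies and heavy-tailed outliers.

```python
import numpy as np

def robust_impute(X):
    """Replace NaNs column-wise with the median of that column's
    observed values. The median is insensitive to the outlier in
    row 4, whereas the column mean would be pulled far off."""
    X = X.astype(float).copy()
    for j in range(X.shape[1]):
        col = X[:, j]
        col[np.isnan(col)] = np.nanmedian(col)
    return X

X = np.array([[1.0,    2.0],
              [np.nan, 3.0],
              [2.0,    np.nan],
              [1000.0, 4.0]])   # 1000.0 plays the role of a heavy-tailed anomaly
X_filled = robust_impute(X)
```

No observation is deleted, so all four rows remain usable downstream even though half of them had missing cells.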

Second, the quantity of data can itself be insufficient. Clearly, solving the first problem above will allow maximal use of the information contained in the data; often, however, this is not enough, which creates two needs. (a) Flexible data augmentation models must be developed to bring the data volume to the necessary scale while maintaining similarity to the original data-generating process. (b) The quality of both the produced data and the estimators built on it must be ensured.
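As a minimal, hypothetical illustration of need (a), the sketch below generates new samples by bootstrap resampling of the original observations plus small Gaussian jitter, scaling the dataset while staying close to the original data-generating process. The function name and the jitter parameter are illustrative, not a project deliverable.

```python
import numpy as np

def augment(X, n_new, jitter=0.05, seed=0):
    """Draw n_new samples by resampling rows of X with replacement,
    then perturb each with Gaussian noise proportional to the
    per-feature standard deviation (jitter=0 reproduces exact rows)."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=n_new)
    noise = rng.normal(0.0, jitter * X.std(axis=0),
                       size=(n_new, X.shape[1]))
    return X[idx] + noise

X = np.random.default_rng(1).normal(size=(100, 3))
X_aug = augment(X, n_new=400)   # 4x the original data volume
```

This covers only the simplest augmentation strategy; richer generative models would be needed when the data-generating process is structured (e.g., time series).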


Current Partners

 

For more information, please send an email to van-tam.nguyen@telecom-paris.fr

 

