Publications
Publications by category in reverse chronological order. Generated by jekyll-scholar.
2026
- [OpenReview] LUSB: Formalizing and Benchmarking Unlearning Attacks and Defenses against Large Language Models. Chenxu Zhao, Wei Qian, Aobo Chen, Jingquan Wang, and 2 more authors. 2026.
In recent years, large language models (LLMs) have achieved remarkable advancements. However, LLMs can inadvertently memorize sensitive or copyrighted content, raising privacy and legal concerns. Due to the high cost of retraining from scratch, recent research has introduced a series of promising machine unlearning techniques, namely LLM unlearning, to selectively remove specific content from LLMs. Yet, as a new paradigm, LLM unlearning may introduce critical security vulnerabilities by exposing additional interaction surfaces that adversaries can exploit, leading to emerging security threats against LLMs. Existing literature lacks a systematic understanding and comprehensive evaluation of unlearning attacks and their defenses in the context of LLMs. To bridge this gap, we introduce Language Unlearning Security Benchmark (LUSB), the first comprehensive framework designed to formalize, evaluate, and benchmark unlearning attacks and defenses against LLMs. Based on LUSB, we benchmark 16 different types of unlearning attack/defense methods across 13 LLM architectures, 9 LLM unlearning methods, and 12 task datasets. Our benchmark results reveal that unlearning attacks significantly undermine the security performance of LLMs, even in the presence of traditional LLM security defenses. Notably, unlearning attacks can not only amplify adversarial vulnerabilities of LLMs (e.g., increased susceptibility to jailbreak attacks) but also be exploited to gradually activate traditional poisoning or backdoor behaviors in LLMs. Further, our results underscore the limited effectiveness of existing defense strategies, emphasizing the urgent need for more advanced approaches to LLM unlearning security. We provide our benchmark in the supplementary material to facilitate further research in this area.
@misc{zhao2026lusb,
  title = {{LUSB}: Formalizing and Benchmarking Unlearning Attacks and Defenses against Large Language Models},
  author = {Zhao, Chenxu and Qian, Wei and Chen, Aobo and Wang, Jingquan and Yang, Carl and Huai, Mengdi},
  year = {2026},
  url = {https://openreview.net/forum?id=lk3j87oquF},
}
2025
- [AAAI] Towards Benchmarking Privacy Vulnerabilities in Selective Forgetting with Large Language Models. Wei Qian, Chenxu Zhao, Yangyi Li, and Mengdi Huai. arXiv preprint arXiv:2512.18035, 2025.
The rapid advancements in artificial intelligence (AI) have primarily focused on the process of learning from data to acquire knowledgeable learning systems. As these systems are increasingly deployed in critical areas, ensuring their privacy and alignment with human values is paramount. Recently, selective forgetting (also known as machine unlearning) has shown promise for privacy and data removal tasks, and has emerged as a transformative paradigm shift in the field of AI. It refers to the ability of a model to selectively erase the influence of previously seen data, which is especially important for compliance with modern data protection regulations and for aligning models with human values. Despite its promise, selective forgetting raises significant privacy concerns, especially when the data involved come from sensitive domains. While new unlearning-induced privacy attacks are continuously proposed, each is shown to outperform its predecessors using different experimental settings, which can lead to overly optimistic and potentially unfair assessments that may disproportionately favor one particular attack over the others. In this work, we present the first comprehensive benchmark for evaluating privacy vulnerabilities in selective forgetting. We extensively investigate privacy vulnerabilities of machine unlearning techniques and benchmark privacy leakage across a wide range of victim data, state-of-the-art unlearning privacy attacks, unlearning methods, and model architectures. We systematically evaluate and identify critical factors related to unlearning-induced privacy leakage. With our novel insights, we aim to provide a standardized tool for practitioners seeking to deploy customized unlearning applications with faithful privacy assessments.
@article{qian2025towards,
  title = {Towards Benchmarking Privacy Vulnerabilities in Selective Forgetting with Large Language Models},
  author = {Qian, Wei and Zhao, Chenxu and Li, Yangyi and Huai, Mengdi},
  journal = {arXiv preprint arXiv:2512.18035},
  year = {2025},
}
- [CIKM] Towards Unveiling Predictive Uncertainty Vulnerabilities in the Context of the Right to Be Forgotten. Wei Qian, Chenxu Zhao, Yangyi Li, Wenqian Ye, and 1 more author. In Proceedings of the 34th ACM International Conference on Information and Knowledge Management, 2025.
Currently, various uncertainty quantification methods have been proposed to provide certainty and probability estimates for deep learning models’ label predictions. Meanwhile, with the growing demand for the right to be forgotten, machine unlearning has been extensively studied as a means to remove the impact of requested sensitive data from a pre-trained model without retraining the model from scratch. However, the vulnerabilities of such generated predictive uncertainties with regard to dedicated malicious unlearning attacks remain unexplored. To bridge this gap, for the first time, we propose a new class of malicious unlearning attacks against predictive uncertainties, where the adversary aims to cause the desired manipulations of specific predictive uncertainty results. We also design novel optimization frameworks for our attacks and conduct extensive experiments, including black-box scenarios. Notably, our extensive experiments show that our attacks are more effective in manipulating predictive uncertainties than traditional attacks that focus on label misclassifications, and existing defenses against conventional attacks are ineffective against our attacks.
@inproceedings{qian2025towardt,
  title = {Towards Unveiling Predictive Uncertainty Vulnerabilities in the Context of the Right to Be Forgotten},
  author = {Qian, Wei and Zhao, Chenxu and Li, Yangyi and Ye, Wenqian and Huai, Mengdi},
  booktitle = {Proceedings of the 34th ACM International Conference on Information and Knowledge Management},
  pages = {5130--5135},
  year = {2025},
}
- [ICCV] Membership Inference Attacks with False Discovery Rate Control. Chenxu Zhao, Wei Qian, Aobo Chen, and Mengdi Huai. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2025.
Recent studies have shown that deep learning models are vulnerable to membership inference attacks (MIAs), which aim to infer whether a data record was used to train a target model or not. To analyze and study these vulnerabilities, various MIA methods have been proposed. Despite the significance and popularity of MIAs, existing works on MIAs are limited in providing guarantees on the false discovery rate (FDR), which refers to the expected proportion of false discoveries among the identified positive discoveries. However, it is very challenging to ensure the false discovery rate guarantees, because the underlying distribution is usually unknown, and the estimated non-member probabilities often exhibit interdependence. To tackle the above challenges, in this paper, we design a novel membership inference attack method, which can provide the guarantees on the false discovery rate. Additionally, we show that our method can also provide the marginal probability guarantee on labeling true non-member data as member data. Notably, our method can work as a wrapper that can be seamlessly integrated with existing MIA methods in a post-hoc manner, while also providing the FDR control. We perform the theoretical analysis for our method. Extensive experiments in various settings (e.g., the black-box setting and the lifelong learning setting) are also conducted to verify the desirable performance of our method. The source code is available in the supplementary material.
@inproceedings{zhao2025membership,
  title = {Membership Inference Attacks with False Discovery Rate Control},
  author = {Zhao, Chenxu and Qian, Wei and Chen, Aobo and Huai, Mengdi},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages = {1216--1227},
  year = {2025},
}
- [AAAI] A Survey of Security and Privacy Issues of Machine Unlearning. Aobo Chen, Yangyi Li, Chenxu Zhao, and Mengdi Huai. 2025.
Machine unlearning is a cutting-edge technology that embodies the privacy legal principle of the right to be forgotten within the realm of machine learning (ML). It aims to remove specific data or knowledge from trained models without retraining from scratch and has gained significant attention in the field of artificial intelligence in recent years. However, the development of machine unlearning research is associated with inherent vulnerabilities and threats, posing significant challenges for researchers and practitioners. In this article, we provide the first comprehensive survey of security and privacy issues associated with machine unlearning by providing a systematic classification across different levels and criteria. Specifically, we begin by investigating unlearning-based security attacks, where adversaries exploit vulnerabilities in the unlearning process to compromise the security of machine learning (ML) models. We then conduct a thorough examination of privacy risks associated with the adoption of machine unlearning. Additionally, we explore existing countermeasures and mitigation strategies designed to protect models from malicious unlearning-based attacks targeting both security and privacy. Further, we provide a detailed comparison between machine unlearning-based security and privacy attacks and traditional malicious attacks. Finally, we discuss promising future research directions for security and privacy issues posed by machine unlearning, offering insights into potential solutions and advancements in this evolving field.
@misc{chen2025survey,
  title = {A Survey of Security and Privacy Issues of Machine Unlearning},
  author = {Chen, Aobo and Li, Yangyi and Zhao, Chenxu and Huai, Mengdi},
  year = {2025},
  publisher = {Wiley Online Library},
}
2024
- [ICML] Data Poisoning Attacks against Conformal Prediction. Yangyi Li, Aobo Chen, Wei Qian, Chenxu Zhao, and 2 more authors. In Proceedings of the 41st International Conference on Machine Learning, 2024.
Efficient and theoretically sound uncertainty quantification is crucial for building trust in deep learning models. This has spurred a growing interest in conformal prediction (CP), a powerful technique that provides a model-agnostic and distribution-free method for obtaining conformal prediction sets with theoretical guarantees. However, the vulnerabilities of such CP methods with regard to dedicated data poisoning attacks have not been studied previously. To bridge this gap, for the first time, in this paper we propose a new class of black-box data poisoning attacks against CP, where the adversary aims to cause the desired manipulations of some specific examples’ prediction uncertainty results (instead of misclassifications). Additionally, we design novel optimization frameworks for our proposed attacks. Further, we conduct extensive experiments to validate the effectiveness of our attacks in various settings (e.g., the full and split CP settings). Notably, our extensive experiments show that our attacks are more effective in manipulating uncertainty results than traditional poisoning attacks that aim at inducing misclassifications, and existing defenses against conventional attacks are ineffective against our proposed attacks.
@inproceedings{pmlr-v235-li24l,
  title = {Data Poisoning Attacks against Conformal Prediction},
  author = {Li, Yangyi and Chen, Aobo and Qian, Wei and Zhao, Chenxu and Lidder, Divya and Huai, Mengdi},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages = {27563--27574},
  year = {2024},
  editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume = {235},
  series = {Proceedings of Machine Learning Research},
  month = {21--27 Jul},
  publisher = {PMLR},
  url = {https://proceedings.mlr.press/v235/li24l.html},
}
- [ICML] Rethinking Adversarial Robustness in the Context of the Right to Be Forgotten. Chenxu Zhao, Wei Qian, Yangyi Li, Aobo Chen, and 1 more author. In Proceedings of the 41st International Conference on Machine Learning, 2024.
The past few years have seen an intense research interest in the practical needs of the "right to be forgotten", which has motivated researchers to develop machine unlearning methods to unlearn a fraction of training data and its lineage. While existing machine unlearning methods prioritize the protection of individuals’ private data, they overlook investigating the unlearned models’ susceptibility to adversarial attacks and security breaches. In this work, we uncover a novel security vulnerability of machine unlearning based on the insight that adversarial vulnerabilities can be bolstered, especially for adversarially robust models. To exploit this observed vulnerability, we propose a novel attack called Adversarial Unlearning Attack (AdvUA), which aims to generate a small fraction of malicious unlearning requests during the unlearning process. AdvUA causes a significant reduction of adversarial robustness in the unlearned model compared to the original model, providing an entirely new capability for adversaries that is infeasible in conventional machine learning pipelines. Notably, we also show that AdvUA can effectively enhance model stealing attacks by extracting additional decision boundary information, further emphasizing the breadth and significance of our research. We also conduct theoretical analysis and study the computational complexity of AdvUA. Extensive numerical studies are performed to demonstrate the effectiveness and efficiency of the proposed attack.
@inproceedings{pmlr-v235-zhao24k,
  title = {Rethinking Adversarial Robustness in the Context of the Right to Be Forgotten},
  author = {Zhao, Chenxu and Qian, Wei and Li, Yangyi and Chen, Aobo and Huai, Mengdi},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages = {60927--60939},
  year = {2024},
  editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume = {235},
  series = {Proceedings of Machine Learning Research},
  month = {21--27 Jul},
  publisher = {PMLR},
  url = {https://proceedings.mlr.press/v235/zhao24k.html},
}
- [ICML] Bridging Model Heterogeneity in Federated Learning via Uncertainty-based Asymmetrical Reciprocity Learning. Jiaqi Wang, Chenxu Zhao, Lingjuan Lyu, Quanzeng You, and 2 more authors. In Proceedings of the 41st International Conference on Machine Learning, 2024.
This paper presents FedType, a simple yet pioneering framework designed to fill research gaps in heterogeneous model aggregation within federated learning (FL). FedType introduces small identical proxy models for clients, serving as agents for information exchange, ensuring model security, and achieving efficient communication simultaneously. To transfer knowledge between large private and small proxy models on clients, we propose a novel uncertainty-based asymmetrical reciprocity learning method, eliminating the need for any public data. Comprehensive experiments conducted on benchmark datasets demonstrate the efficacy and generalization ability of FedType across diverse settings. Our approach redefines federated learning paradigms by bridging model heterogeneity, eliminating reliance on public data, prioritizing client privacy, and reducing communication costs (the code is available in the supplementary materials).
@inproceedings{pmlr-v235-wang24cs,
  title = {Bridging Model Heterogeneity in Federated Learning via Uncertainty-based Asymmetrical Reciprocity Learning},
  author = {Wang, Jiaqi and Zhao, Chenxu and Lyu, Lingjuan and You, Quanzeng and Huai, Mengdi and Ma, Fenglong},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages = {52290--52308},
  year = {2024},
  editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume = {235},
  series = {Proceedings of Machine Learning Research},
  month = {21--27 Jul},
  publisher = {PMLR},
  url = {https://proceedings.mlr.press/v235/wang24cs.html},
}
- [arXiv] Exploring Fairness in Educational Data Mining in the Context of the Right to Be Forgotten. Wei Qian, Aobo Chen, Chenxu Zhao, Yangyi Li, and 1 more author. arXiv preprint arXiv:2405.16798, 2024.
In educational data mining (EDM) communities, machine learning has achieved remarkable success in discovering patterns and structures to tackle educational challenges. Notably, fairness and algorithmic bias have gained attention in learning analytics of EDM. With the increasing demand for the right to be forgotten, there is a growing need for machine learning models to forget sensitive data and its impact, particularly within the realm of EDM. The paradigm of selective forgetting, also known as machine unlearning, has been extensively studied to address this need by eliminating the influence of specific data from a pre-trained model without complete retraining. However, existing research assumes that interactive data removal operations are conducted in secure and reliable environments, neglecting potential malicious unlearning requests that undermine the fairness of machine learning systems. In this paper, we introduce a novel class of selective forgetting attacks designed to compromise the fairness of learning models while maintaining their predictive accuracy, thereby preventing the model owner from detecting the degradation in model performance. Additionally, we propose an innovative optimization framework for selective forgetting attacks, capable of generating malicious unlearning requests across various attack scenarios. We validate the effectiveness of our proposed selective forgetting attacks on fairness through extensive experiments using diverse EDM datasets.
@article{qian2024exploring,
  title = {Exploring Fairness in Educational Data Mining in the Context of the Right to Be Forgotten},
  author = {Qian, Wei and Chen, Aobo and Zhao, Chenxu and Li, Yangyi and Huai, Mengdi},
  journal = {arXiv preprint arXiv:2405.16798},
  year = {2024},
}
- [AAAI] Automated Natural Language Explanation of Deep Visual Neurons with Large Models (student abstract). Chenxu Zhao, Wei Qian, Yucheng Shi, Mengdi Huai, and 1 more author. In Proceedings of the AAAI Conference on Artificial Intelligence, 2024.
Interpreting deep neural networks through examining neurons offers distinct advantages when it comes to exploring the inner workings of Deep Neural Networks. Previous research has indicated that specific neurons within deep vision networks possess semantic meaning and play pivotal roles in model performance. Nonetheless, the current methods for generating neuron semantics heavily rely on human intervention, which hampers their scalability and applicability. To address this limitation, this paper proposes a novel post-hoc framework for generating semantic explanations of neurons with large foundation models, without requiring human intervention or prior knowledge. Experiments are conducted with both qualitative and quantitative analysis to verify the effectiveness of our proposed approach.
@inproceedings{zhao2024automated,
  title = {Automated Natural Language Explanation of Deep Visual Neurons with Large Models (student abstract)},
  author = {Zhao, Chenxu and Qian, Wei and Shi, Yucheng and Huai, Mengdi and Liu, Ninghao},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  volume = {38},
  number = {21},
  pages = {23712--23713},
  year = {2024},
}
- [AAAI] Towards Modeling Uncertainties of Self-explaining Neural Networks via Conformal Prediction. Wei Qian, Chenxu Zhao, Yangyi Li, Fenglong Ma, and 2 more authors. In Proceedings of the AAAI Conference on Artificial Intelligence, 2024.
Despite the recent progress in deep neural networks (DNNs), it remains challenging to explain the predictions made by DNNs. Existing explanation methods for DNNs mainly focus on post-hoc explanations where another explanatory model is employed to provide explanations. The fact that post-hoc methods can fail to reveal the actual original reasoning process of DNNs raises the need to build DNNs with built-in interpretability. Motivated by this, many self-explaining neural networks have been proposed to generate not only accurate predictions but also clear and intuitive insights into why a particular decision was made. However, existing self-explaining networks are limited in providing distribution-free uncertainty quantification for the two simultaneously generated prediction outcomes (i.e., a sample’s final prediction and its corresponding explanations for interpreting that prediction). Importantly, they also fail to establish a connection between the confidence values assigned to the generated explanations in the interpretation layer and those allocated to the final predictions in the ultimate prediction layer. To tackle the aforementioned challenges, in this paper, we design a novel uncertainty modeling framework for self-explaining networks, which not only demonstrates strong distribution-free uncertainty modeling performance for the generated explanations in the interpretation layer but also excels in producing efficient and effective prediction sets for the final predictions based on the informative high-level basis explanations. We perform the theoretical analysis for the proposed framework. Extensive experimental evaluation demonstrates the effectiveness of the proposed uncertainty framework.
@inproceedings{qian2024towards,
  title = {Towards Modeling Uncertainties of Self-explaining Neural Networks via Conformal Prediction},
  author = {Qian, Wei and Zhao, Chenxu and Li, Yangyi and Ma, Fenglong and Zhang, Chao and Huai, Mengdi},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  volume = {38},
  number = {13},
  pages = {14651--14659},
  year = {2024},
}
2023
- [NeurIPS] Static and Sequential Malicious Attacks in the Context of Selective Forgetting. Chenxu Zhao, Wei Qian, Rex Ying, and Mengdi Huai. Advances in Neural Information Processing Systems, 2023.
With the growing demand for the right to be forgotten, there is an increasing need for machine learning models to forget sensitive data and its impact. To address this, the paradigm of selective forgetting (a.k.a. machine unlearning) has been extensively studied, which aims to remove the impact of requested data from a well-trained model without retraining from scratch. Despite its significant success, limited attention has been given to the security vulnerabilities of the unlearning system concerning malicious data update requests. Motivated by this, in this paper, we explore the possibility and feasibility of malicious data update requests during the unlearning process. Specifically, we first propose a new class of malicious selective forgetting attacks, which involves a static scenario where all the malicious data update requests are provided by the adversary at once. Additionally, considering the sequential setting where the data update requests arrive sequentially, we also design a novel framework for sequential forgetting attacks, which is formulated as a stochastic optimal control problem. We also propose novel optimization algorithms that can find the effective malicious data update requests. We perform theoretical analyses for the proposed selective forgetting attacks, and extensive experimental results validate the effectiveness of our proposed selective forgetting attacks.
@article{zhao2023static,
  title = {Static and Sequential Malicious Attacks in the Context of Selective Forgetting},
  author = {Zhao, Chenxu and Qian, Wei and Ying, Rex and Huai, Mengdi},
  journal = {Advances in Neural Information Processing Systems},
  volume = {36},
  pages = {74966--74979},
  year = {2023},
}
- [KDD] Towards Understanding and Enhancing Robustness of Deep Learning Models against Malicious Unlearning Attacks. Wei Qian, Chenxu Zhao, Wei Le, Meiyi Ma, and 1 more author. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2023.
Given the availability of abundant data, deep learning models have been advanced and become ubiquitous in the past decade. In practice, due to many different reasons (e.g., privacy, usability, and fidelity), individuals also want the trained deep models to forget some specific data. Motivated by this, machine unlearning (also known as selective data forgetting) has been intensively studied, which aims at removing the influence that any particular training sample had on the trained model during the unlearning process. However, people usually employ machine unlearning methods as trusted basic tools and rarely have any doubt about their reliability. In fact, the increasingly critical role of machine unlearning makes deep learning models susceptible to the risk of being maliciously attacked. To well understand the performance of deep learning models in malicious environments, we believe that it is critical to study the robustness of deep learning models to malicious unlearning attacks, which happen during the unlearning process. To bridge this gap, in this paper, we first demonstrate that malicious unlearning attacks pose immense threats to the security of deep learning systems. Specifically, we present a broad class of malicious unlearning attacks wherein maliciously crafted unlearning requests trigger deep learning models to misbehave on target samples in a highly controllable and predictable manner. In addition, to improve the robustness of deep learning models, we also present a general defense mechanism, which aims to identify and unlearn effective malicious unlearning requests based on their gradient influence on the unlearned models. Further, theoretical analyses are conducted to analyze the proposed methods. Extensive experiments on real-world datasets validate the vulnerabilities of deep learning models to malicious unlearning attacks and the effectiveness of the introduced defense mechanism.
@inproceedings{qian2023towards,
  title = {Towards Understanding and Enhancing Robustness of Deep Learning Models against Malicious Unlearning Attacks},
  author = {Qian, Wei and Zhao, Chenxu and Le, Wei and Ma, Meiyi and Huai, Mengdi},
  booktitle = {Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  pages = {1932--1942},
  year = {2023},
}
2022
- [BIBM] Patient Similarity Learning with Selective Forgetting. Wei Qian, Chenxu Zhao, Huajie Shao, Minghan Chen, and 2 more authors. In 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2022.
Patient similarity learning aims to use patient information such as electronic medical records and genetic data as input to calculate the pairwise similarity between patients, and it is becoming increasingly important in healthcare applications. However, in many cases, patient similarity learning models also need to forget some patient data. From the perspective of privacy, patients desire a tool to erase the impacts of their sensitive data from the trained patient similarity models. From the perspective of utility, if a patient similarity model’s utility is damaged by some bad patient data, the patient similarity model needs to forget such patient data to regain utility. Although some researchers have studied the problem of machine unlearning, existing methods cannot be directly applied to patient similarity learning as they fail to consider the comparative relationships among patients. In addition, they also fail to identify the optimal conditions of the local objective functions. In this paper, we fill in this gap by studying the unlearning problem in patient similarity learning. To unlearn the knowledge of a specific patient, we propose a novel erasable patient similarity learning framework, which enjoys the provable data removal guarantee and achieves high unlearning efficiency while keeping high model utility in patient similarity learning. We also conduct extensive experiments on real-world patient disease datasets to verify the desired properties of the proposed erasable framework.
@inproceedings{qian2022patient,
  title = {Patient Similarity Learning with Selective Forgetting},
  author = {Qian, Wei and Zhao, Chenxu and Shao, Huajie and Chen, Minghan and Wang, Fei and Huai, Mengdi},
  booktitle = {2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)},
  pages = {529--534},
  year = {2022},
  organization = {IEEE},
}
- [MLSP] Adaptive Gaussian Process Spectral Kernel Learning for 5G Wireless Traffic Prediction. Xinyi Zhang, Chenxu Zhao, Yun Yang, Zhidi Lin, and 2 more authors. In 2022 IEEE 32nd International Workshop on Machine Learning for Signal Processing (MLSP), 2022.
Wireless traffic prediction plays a crucial role in network management. However, achieving an accurate prediction for the fifth-generation (5G) network is intensely difficult because the time series data collected from 5G multi-antenna system shows rather complex patterns. To accurately capture intricate wireless traffic patterns and calibrate the prediction uncertainty, this paper resorts to the Gaussian process regression (GPR) model and proposes an adaptive kernel learning method. Specifically, we adopt a universal kernel, namely the grid spectral mixture (GSM) kernel, in the GPR model and further propose a novel trans-dimensional kernel learning algorithm by combining optimization and sampling methods to obtain the best GSM kernel configuration, boosting the prediction performance and saving the storage overhead. Experimental results demonstrate that GPR with the proposed adaptive spectral kernel learning algorithm yields superior prediction performance compared to its competitors.
@inproceedings{zhang2022adaptive,
  title = {Adaptive Gaussian Process Spectral Kernel Learning for 5G Wireless Traffic Prediction},
  author = {Zhang, Xinyi and Zhao, Chenxu and Yang, Yun and Lin, Zhidi and Wang, Juntao and Yin, Feng},
  booktitle = {2022 IEEE 32nd International Workshop on Machine Learning for Signal Processing (MLSP)},
  pages = {1--6},
  year = {2022},
  organization = {IEEE},
}