
Hello, welcome to my homepage! I am currently a fourth-year PhD student in the Department of Computer Science and Engineering at the University of California, San Diego, where I am lucky to be advised by Professor Yu-Xiang Wang. Before transferring to UCSD, I spent the first three years of my PhD in the Department of Computer Science at UCSB. Before that, I received my Bachelor's degree in Mathematics and Statistics from Peking University.
My research focuses on statistical learning theory, including the theory of reinforcement learning, differential privacy, and deep learning. In particular, I am most interested in online reinforcement learning with low adaptivity (switching cost, batch complexity, deployment complexity, etc.) and differentially private reinforcement learning. In addition, I have worked on designing differentially private algorithms for various applications. Most recently, I have begun working on the generalization ability of neural networks.
Publications

Sample-Efficient Reinforcement Learning with loglog(T) Switching Cost
Dan Qiao, Ming Yin, Ming Min, Yu-Xiang Wang
ICML 2022 spotlight.
Fuheng Zhao*, Dan Qiao*, Rachel Redberg, Divyakant Agrawal, Amr El Abbadi, Yu-Xiang Wang
NeurIPS 2022.
Dan Qiao, Yu-Xiang Wang
ICLR 2023.
Dan Qiao, Yu-Xiang Wang
NeurIPS 2023.
Jianyu Xu, Dan Qiao, Yu-Xiang Wang
AISTATS 2023.
Dan Qiao, Yu-Xiang Wang
AISTATS 2023.
Dan Qiao, Ming Yin, Yu-Xiang Wang
ISIT 2024.
Dan Qiao, Yu-Xiang Wang
ICML 2024.
Dan Qiao, Kaiqi Zhang, Esha Singh, Daniel Soudry, Yu-Xiang Wang
NeurIPS 2024 spotlight.
Dan Qiao, Yu-Xiang Wang
NeurIPS 2024.