Optimization Study of Dynamic Weight Adjustment Based on Reinforcement Learning for Trajectory Tracking in Sorting Robots
Research Article | Open Access | CC BY

Shuaizhen Li 1*
1 School of Electrical and Electronic Engineering, Shijiazhuang University of Railway, Shijiazhuang 050043, China
*Corresponding author: 1714612631@qq.com
Published on 26 November 2025

Abstract

Trajectory tracking control for sorting robots in dynamic warehouse environments is challenging due to environmental uncertainty and frequent disturbances. The performance of traditional model predictive control (MPC) depends heavily on manually pre-tuned weight parameters in its cost function. This fixed configuration limits the controller's ability to autonomously balance multiple objectives, such as trajectory tracking, obstacle avoidance, and energy consumption, in dynamic settings, thereby constraining adaptability and robustness. To address this, this paper introduces a hierarchical reinforcement learning framework for online, autonomous adjustment of MPC weights. The framework consists of a high-level meta-controller and a low-level MPC executor: the high-level controller dynamically adjusts the MPC weight matrix based on the global environment state, enabling intelligent prioritization of control objectives. Comparative experiments in a high-fidelity Gazebo simulation demonstrate that the proposed method outperforms both fixed-weight MPC and PID controllers in tracking accuracy, task efficiency, safety, and energy consumption. The results validate the effectiveness of the approach and reveal an interpretable, learning-based decision-making mechanism, offering a reliable solution for high-performance robot control in dynamic environments.
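The abstract's two-layer structure, a high-level policy that maps the environment state to MPC cost weights and a low-level MPC that consumes them, can be illustrated with a minimal sketch. This is not the paper's implementation: the state variables (obstacle distance, battery level), the heuristic weight mapping standing in for the learned PPO policy, and the single-stage cost are all illustrative assumptions.

```python
import numpy as np

def meta_controller(obstacle_distance, battery_level):
    """Toy stand-in for the high-level RL policy (e.g. trained with PPO):
    maps a global environment state to MPC cost weights.
    The linear/inverse heuristics here are illustrative only; in the
    paper's framework these outputs come from a learned policy."""
    w_track = 1.0  # tracking weight kept as the baseline objective
    # Prioritize avoidance as the obstacle gets closer (capped at 10)
    w_obstacle = float(np.clip(2.0 / max(obstacle_distance, 1e-3), 0.0, 10.0))
    # Penalize energy use more as the battery drains
    w_energy = float(np.clip(1.0 - battery_level, 0.0, 1.0))
    return w_track, w_obstacle, w_energy

def mpc_stage_cost(pos_err, ctrl, obstacle_dist, weights):
    """Single-stage MPC cost combining the three competing objectives;
    a real MPC would sum this over a prediction horizon and minimize
    over the control sequence."""
    w_track, w_obs, w_energy = weights
    return (w_track * pos_err**2
            + w_obs / (obstacle_dist**2 + 1e-6)
            + w_energy * ctrl**2)

# With a nearby obstacle, avoidance dominates; with a full battery,
# the energy term vanishes.
weights_near = meta_controller(obstacle_distance=0.2, battery_level=1.0)
weights_far = meta_controller(obstacle_distance=5.0, battery_level=1.0)
print(weights_near, weights_far)
```

The key design point this sketch mirrors is the separation of concerns: the meta-controller reasons about *priorities* at a slow timescale, while the MPC handles fast, constraint-aware trajectory optimization with whatever weights it is handed.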

Keywords:

Sorting robot, Hierarchical reinforcement learning, Model predictive control, Trajectory tracking, Dynamic weight optimization, Proximal policy optimization


References

[1]. Garcia, C. E., Prett, D. M., & Morari, M. (1989). Model predictive control: theory and practice—a survey. Automatica, 25(3), 335–348.

[2]. Ma, L., & Fu, L. (2010). Review of optimal control theory for nonlinear systems. Science and Technology Information. http://www.cnki.net/.

[3]. Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction (2nd ed.). MIT Press.

[4]. Sutton, R. S. (1991). Dyna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin, 2(4), 160–163.

[5]. Littman, M. L. (1994). Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the 11th International Conference on Machine Learning (pp. 157–163).

Cite this article

Li, S. (2025). Optimization Study of Dynamic Weight Adjustment Based on Reinforcement Learning for Trajectory Tracking in Sorting Robots. Applied and Computational Engineering, 210, 28–35.

Data availability

The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.

About volume

Volume title: Proceedings of CONF-MLA 2025 Symposium: Intelligent Systems and Automation: AI Models, IoT, and Robotic Algorithms

ISBN: 978-1-80590-567-7 (Print) / 978-1-80590-568-4 (Online)
Editor: Hisham AbouGrad
Conference date: 12 November 2025
Series: Applied and Computational Engineering
Volume number: Vol. 210
ISSN: 2755-2721 (Print) / 2755-273X (Online)