A Receding Horizon Approach to Cooperative Control in Stochastic Settings

Date: 24 May 2005

Venue: Auditorium, Tower Block, SIMTech, 71 Nanyang Drive


We consider a setting where multiple mobile “agents” form a team that cooperates to visit multiple “targets” and collect the rewards associated with them. The “agents” may be autonomous vehicles (UAVs) or the nodes in a wireless sensor network. The team’s goal is to execute a certain “mission”. In this presentation, we consider a reward maximisation mission: each target has an associated reward, and the objective is to maximise the total reward accumulated over a given time interval by visiting the targets. Complicating factors include uncertainty regarding the locations of targets and the effectiveness of reward collection, differences among agent capabilities, and the fact that rewards are time-varying. An event-driven Receding Horizon (RH) control scheme is presented which dynamically determines agent trajectories by solving a sequence of simple optimisation problems over a planning horizon and executing the resulting controls over a shorter action horizon. Thus, the complexity of target assignment, routing, and trajectory planning is bypassed by combining all three tasks into a single, efficiently scalable process. A key property of this scheme is that the trajectories are stationary, in the sense that they ultimately assign agents to targets, even though the controller is not explicitly designed to perform such discrete point assignments. This property is rigorously established using potential field methods. The presentation will include step-by-step simulation demonstrations to illustrate the performance of this approach in stochastic settings. It will also include video clips of this approach in use in a laboratory setting at Boston University (see http:/), where small mobile, wirelessly cooperating robots are programmed to execute reward maximisation missions using a distributed control version of this RH approach.
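To give a flavour of the plan-then-act loop described above, here is a minimal single-agent sketch. It is an illustration only, not the scheme from the talk: the greedy heading search, the distance-discounted reward objective, and all parameter values are assumptions standing in for the potential-field optimisation the speaker will present.

```python
import math

def plan_heading(agent, targets, planning_h, candidates=16):
    """Evaluate a set of candidate headings and keep the one whose
    endpoint, after holding it over the whole planning horizon,
    maximises a distance-discounted sum of target rewards
    (a crude stand-in for the talk's potential-field objective)."""
    best_theta, best_value = 0.0, -math.inf
    for k in range(candidates):
        theta = 2.0 * math.pi * k / candidates
        x = agent[0] + planning_h * math.cos(theta)
        y = agent[1] + planning_h * math.sin(theta)
        value = sum(reward * math.exp(-math.hypot(x - tx, y - ty))
                    for tx, ty, reward in targets)
        if value > best_value:
            best_theta, best_value = theta, value
    return best_theta

def receding_horizon(agent, targets, planning_h=5.0, action_h=1.0, steps=30):
    """Alternate planning over the planning horizon with execution over
    the shorter action horizon, replanning at every step."""
    for _ in range(steps):
        theta = plan_heading(agent, targets, planning_h)
        agent = (agent[0] + action_h * math.cos(theta),
                 agent[1] + action_h * math.sin(theta))
    return agent

# One agent, two targets given as (x, y, reward): the trajectory settles
# near the richer target, informally illustrating the stationarity
# property (the controller ends up "assigning" the agent to a target).
final = receding_horizon((0.0, 0.0), [(10.0, 0.0, 5.0), (0.0, 10.0, 1.0)])
```

The key structural point is that planning and execution use different horizons: the optimisation looks `planning_h` ahead, but only the first `action_h` of each plan is ever executed before replanning.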

About Prof Cassandras
Christos G. Cassandras is Professor of Manufacturing Engineering and Professor of Electrical and Computer Engineering at Boston University. He received degrees from Yale University (B.S., 1977), Stanford University (M.S.E.E., 1978), and Harvard University (S.M., 1979; Ph.D., 1982). From 1982 to 1984 he was with ITP Boston, Inc., where he worked on the design of automated manufacturing systems. From 1984 to 1996 he was a faculty member in the Department of Electrical and Computer Engineering, University of Massachusetts/Amherst. He specialises in the areas of discrete event and hybrid systems, stochastic optimisation, and computer simulation, with applications to computer and sensor networks, manufacturing systems, and transportation systems.

He has published over 200 refereed papers in these areas, as well as two textbooks. He has guest-edited several technical journal issues and serves on several journal editorial boards. Dr. Cassandras is currently Editor-in-Chief of the IEEE Transactions on Automatic Control, having previously served as its Editor for Technical Notes and Correspondence and as an Associate Editor. He is a member of the IEEE CSS Board of Governors, has chaired the CSS Technical Committee on Control Theory, and has served as Chair of several conferences, including the 2004 IEEE Conference on Decision and Control. He has been a plenary speaker at various international conferences, including the American Control Conference in 2001 and the IEEE Conference on Decision and Control in 2002. He is the recipient of several awards, including the 1999 Harold Chestnut Prize (IFAC Best Control Engineering Textbook) for Discrete Event Systems: Modeling and Performance Analysis, and a 1991 Lilly Fellowship. He is a member of Phi Beta Kappa and Tau Beta Pi, and a Fellow of the IEEE.

Joint organisers: NUS and the IEEE Control, R&A and SMC Chapters
Free admission. All are welcome.

To register, please email your details (Name, Organisation Name, Contact No) to