Uppsala University Publications
Optimal energy allocation for linear control with packet loss under energy harvesting constraints
Knorn, Steffi; Dey, Subhrakanti (Uppsala University, Disciplinary Domain of Science and Technology, Technology, Department of Engineering Sciences, Signals and Systems Group)
2017 (English). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 77, pp. 259-267. Article in journal (Refereed). Published.
Abstract [en]

This paper studies a closed-loop linear control system operating over a lossy communication link. A sensor computes a state estimate of the observed discrete-time system and transmits it in packetized form to the controller at the receiver over a randomly time-varying (fading) packet-dropping link. The receiver sends an ACK/NACK packet to the transmitter over an acknowledgement channel which may itself be prone to packet loss. The energy used for packet transmission depletes a battery of limited capacity at the sensor, but the battery is also replenished by an energy harvester with access to an everlasting but random source of harvested energy. Under the assumption that the energy harvesting and fading channel gain processes follow finite-state Markov chain models, the objective is to design an optimal energy allocation policy at the transmitter and an optimal control policy at the receiver so that an average infinite-horizon linear quadratic Gaussian (LQG) control cost is minimized. It is shown that in the case of perfect channel feedback a separation principle holds: the optimal LQG controller is linear, and the optimal energy allocation policy at the transmitter can be obtained by solving the Bellman dynamic programming equation. A Q-learning algorithm is used to approximate the optimal energy allocation policy when the system parameters are unknown. Numerical simulation examples show that the dynamic programming based policies outperform various simple heuristic policies, especially at higher battery capacities.
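The abstract's computational recipe — model battery, channel gain, and harvested energy as a finite-state Markov decision process and learn the transmit-energy policy when parameters are unknown — can be made concrete with a toy sketch. The Python snippet below is a minimal tabular Q-learning illustration under invented assumptions: a 5-unit battery, two-state Markov chains for channel gain and harvesting, an exponential packet-drop model exp(-energy x gain), a fixed stage cost standing in for the LQG penalty of a lost state-estimate packet, and perfect acknowledgements (the paper also treats imperfect ACKs, which this sketch ignores). None of these numbers or modelling choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative model: all parameters are assumptions, not from the paper ---
B_MAX = 5                       # battery capacity (discrete energy units)
GAINS = np.array([0.3, 1.0])    # finite-state fading channel gains
P_GAIN = np.array([[0.8, 0.2],  # Markov transition matrix for the channel state
                   [0.3, 0.7]])
HARVEST = np.array([0, 1])      # harvested energy per slot (Markov chain states)
P_HARV = np.array([[0.6, 0.4],
                   [0.4, 0.6]])

def loss_prob(energy, gain):
    """Packet drop probability, decreasing in transmit energy times channel gain."""
    return np.exp(-energy * gain)

def stage_cost(dropped):
    """Toy proxy for the LQG penalty of losing the state-estimate packet."""
    return 10.0 if dropped else 1.0

# State: (battery level, channel index, harvest index); action: energy in 0..battery.
Q = np.zeros((B_MAX + 1, len(GAINS), len(HARVEST), B_MAX + 1))

alpha, gamma, eps = 0.1, 0.95, 0.1
b, g, h = B_MAX, 0, 0
for step in range(200_000):
    # Epsilon-greedy over feasible energies (cannot spend more than the battery holds).
    if rng.random() < eps:
        a = int(rng.integers(0, b + 1))
    else:
        a = int(np.argmin(Q[b, g, h, :b + 1]))
    dropped = rng.random() < loss_prob(a, GAINS[g])
    c = stage_cost(dropped)
    # Battery dynamics: spend a, add harvested energy, clip at capacity.
    b2 = min(b - a + HARVEST[h], B_MAX)
    g2 = rng.choice(len(GAINS), p=P_GAIN[g])
    h2 = rng.choice(len(HARVEST), p=P_HARV[h])
    # Q-learning update; costs are minimized, so take the min over next feasible actions.
    target = c + gamma * Q[b2, g2, h2, :b2 + 1].min()
    Q[b, g, h, a] += alpha * (target - Q[b, g, h, a])
    b, g, h = b2, g2, h2

# Greedy learned policy: energy to spend per battery level in the good channel/harvest state.
policy = [int(np.argmin(Q[lvl, 1, 1, :lvl + 1])) for lvl in range(B_MAX + 1)]
print("energy allocation vs. battery level:", policy)
```

The greedy policy read off the learned Q-table plays the role of the approximated optimal energy allocation; in the paper, the analogous Q-learning policy is benchmarked against the exact Bellman dynamic programming solution and simple heuristic policies.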

Place, publisher, year, edition, pages
2017. Vol. 77, pp. 259-267.
Keyword [en]
Kalman filtering (KF), Energy harvesting, Imperfect acknowledgements
National Category
Computer and Information Science; Engineering and Technology
Identifiers
URN: urn:nbn:se:uu:diva-306985
DOI: 10.1016/j.automatica.2016.11.036
ISI: 000395354700028
OAI: oai:DiVA.org:uu-306985
DiVA: diva2:1045043
Available from: 2016-11-08. Created: 2016-11-08. Last updated: 2017-04-28. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
