Uppsala University Publications
Optimal sensor transmission energy allocation for linear control over a packet dropping link with energy harvesting
Uppsala University, Disciplinary Domain of Science and Technology, Technology, Department of Engineering Sciences, Signals and Systems Group.
2015 (English) In: 2015 54th IEEE Conference on Decision and Control (CDC), 2015, 1199-1204 p. Conference paper, Published paper (Refereed)
Abstract [en]

This paper studies a closed-loop linear control system. The sensor computes a state estimate and sends it to the controller/actuator in the receiver block over a randomly fading, packet-dropping link. The receiver sends an ACK/NACK packet back to the transmitter over a feedback link. The transmission energy per packet at the sensor is drawn from a battery of limited capacity, replenished by an energy harvester. The objective is to design an optimal energy allocation policy and an optimal control policy that jointly minimize a finite-horizon LQG control cost. It is shown that when the receiver-to-sensor feedback channel is error-free, a separation principle holds: the optimal LQG controller is linear, the Kalman filter is optimal, and the optimal energy allocation policy is obtained by solving a backward dynamic programming equation. When the feedback channel is erroneous, the separation principle does not hold. In this case, we propose a suboptimal policy in which the controller still uses a linear control law and the transmitter minimizes an expected sum of the trace of an "estimated" receiver state estimation error covariance matrix. Simulations illustrate the relative performance of the proposed algorithms and various heuristic algorithms for both the perfect and imperfect feedback cases. The dynamic-programming-based policies are seen to outperform the simple heuristic policies by a clear margin.
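
The backward dynamic programming step mentioned for the error-free feedback case can be pictured with a small numerical sketch. The Python snippet below is a minimal, illustrative reconstruction under strong simplifying assumptions: a scalar plant, a discretized battery, an assumed exponential drop-probability model, and the battery level as the only DP state (the paper's recursion additionally carries the estimation error covariance and the fading channel state). All parameter values and function names are hypothetical and do not come from the paper.

```python
import numpy as np

# Illustrative backward DP for per-packet transmit energy allocation under an
# energy-harvesting battery constraint. Parameters below are assumptions.
T = 10                                    # finite horizon
B_MAX = 10                                # battery capacity (energy units)
battery_levels = np.arange(B_MAX + 1)     # discretized battery state
energy_choices = np.arange(0, 6)          # admissible per-packet transmit energies
harvest_probs = {0: 0.5, 1: 0.3, 2: 0.2}  # assumed harvested-energy distribution per slot

a = 1.2                                   # scalar plant: x_{k+1} = a x_k + w_k
q = 1.0                                   # process noise variance

def drop_prob(e):
    """Packet drop probability as a decreasing function of transmit energy
    (an assumed exponential model; the paper uses a fading channel)."""
    return np.exp(-0.8 * e)

def expected_cov(p, e):
    """Expected receiver estimation error variance after one slot:
    grows on a drop, resets to q on successful reception."""
    pd = drop_prob(e)
    return pd * (a ** 2 * p + q) + (1.0 - pd) * q

# value[b] = expected cost-to-go from battery level b (terminal cost = 0)
value = np.zeros(B_MAX + 1)
policy = np.zeros((T, B_MAX + 1), dtype=int)

for k in reversed(range(T)):
    new_value = np.full(B_MAX + 1, np.inf)
    for b in battery_levels:
        for e in energy_choices:
            if e > b:                     # cannot spend more energy than stored
                continue
            # stage cost: expected error-variance proxy (fixed prior variance
            # used here as a simplification; the paper tracks it as a state)
            stage = expected_cov(1.0, e)
            # expectation of the cost-to-go over the harvesting distribution,
            # with the battery saturating at its capacity
            future = sum(ph * value[min(b - e + h, B_MAX)]
                         for h, ph in harvest_probs.items())
            total = stage + future
            if total < new_value[b]:
                new_value[b] = total
                policy[k, b] = e
    value = new_value

print("energy allocation at k=0, per battery level:", policy[0])
```

As expected for this kind of recursion, the resulting policy spends more energy when the battery is well charged and conserves energy near depletion, which is the qualitative behaviour the dynamic-programming-based policies in the paper are compared against heuristics on.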

Place, publisher, year, edition, pages
2015, 1199-1204 p.
National Category
Computer Science; Engineering and Technology
Identifiers
URN: urn:nbn:se:uu:diva-267424
ISI: 000381554501059
ISBN: 9781479978861 (print)
OAI: oai:DiVA.org:uu-267424
DiVA: diva2:873091
Conference
54th IEEE Conference on Decision and Control (CDC), Osaka, Japan, December 2015
Available from: 2015-11-23 Created: 2015-11-23 Last updated: 2017-07-05 Bibliographically approved

Open Access in DiVA

No full text

Authority records

Knorn, Steffi; Dey, Subhrakanti

