
uu.se: Publications from Uppsala University
Schön, Thomas B., Professor (ORCID iD: orcid.org/0000-0001-5183-234X)
Publications (10 of 133)
Baumann, D. & Schön, T. B. (2024). Safe Reinforcement Learning in Uncertain Contexts. IEEE Transactions on Robotics, 40, 1828-1841
Safe Reinforcement Learning in Uncertain Contexts
2024 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 40, p. 1828-1841. Article in journal (Refereed), Published
Abstract [en]

When deploying machine learning algorithms in the real world, guaranteeing safety is an essential asset. Existing safe learning approaches typically consider continuous variables, i.e., regression tasks. However, in practice, robotic systems are also subject to discrete, external environmental changes, e.g., having to carry objects of certain weights or operating on frozen, wet, or dry surfaces. Such influences can be modeled as discrete context variables. In the existing literature, such contexts are, if considered, mostly assumed to be known. In this work, we drop this assumption and show how we can perform safe learning when we cannot directly measure the context variables. To achieve this, we derive frequentist guarantees for multiclass classification, allowing us to estimate the current context from measurements. Furthermore, we propose an approach for identifying contexts through experiments. We discuss under which conditions we can retain theoretical guarantees and demonstrate the applicability of our algorithm on a Furuta pendulum with camera measurements of different weights that serve as contexts.
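
To give a flavour of the kind of frequentist guarantee referred to above, the sketch below is a generic, illustrative calculation and not the bound derived in the paper: assuming a context classifier that is correct with probability p > 0.5 on each independent measurement, a Hoeffding bound on the majority vote gives a number of measurements that suffices to identify the true context with confidence 1 - delta. Both p and delta are hypothetical inputs here.

# Illustrative sketch only: a generic Hoeffding-style sample-size bound for
# identifying a discrete context by majority vote. This is not the bound
# derived in the paper; the per-measurement accuracy p and the confidence
# level delta are assumed, hypothetical inputs.
import math

def measurements_needed(p: float, delta: float) -> int:
    """Number of i.i.d. measurements so that a majority vote over a classifier
    with per-measurement accuracy p > 0.5 returns the true context with
    probability at least 1 - delta (Hoeffding bound)."""
    assert p > 0.5, "majority voting only helps if the classifier beats chance"
    return math.ceil(math.log(1.0 / delta) / (2.0 * (p - 0.5) ** 2))

# Example: a classifier that is right 80% of the time per measurement needs
# 26 measurements to identify the current context with 99% confidence.
print(measurements_needed(p=0.8, delta=0.01))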

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Heuristic algorithms, Robots, Safety, Uncertainty, Current measurement, Cameras, Dynamical systems, Frequentist bounds, multiclass classification, safe reinforcement learning
National Category
Control Engineering; Computer Sciences
Identifiers
urn:nbn:se:uu:diva-526072 (URN), 10.1109/TRO.2024.3354176 (DOI), 001175784100001 ()
Available from: 2024-04-05. Created: 2024-04-05. Last updated: 2024-04-05. Bibliographically approved
Martin, T., Schön, T. B. & Allgöwer, F. (2023). Guarantees for data-driven control of nonlinear systems using semidefinite programming: A survey. Annual Reviews in Control, 56, Article ID 100911.
Guarantees for data-driven control of nonlinear systems using semidefinite programming: A survey
2023 (English). In: Annual Reviews in Control, ISSN 1367-5788, E-ISSN 1872-9088, Vol. 56, article id 100911. Article, review/survey (Refereed), Published
Abstract [en]

This survey presents recent research on determining control-theoretic properties and designing controllers with rigorous guarantees using semidefinite programming, for nonlinear systems for which no mathematical models but measured trajectories are available. Data-driven control techniques have been developed to circumvent time-consuming modelling by first principles and because of the increasing availability of data. Recently, this research field has gained increased attention through the application of Willems' fundamental lemma, which provides a fertile ground for the development of data-driven control schemes with guarantees for linear time-invariant systems. While the fundamental lemma can be generalized to further system classes, there does not exist a comparable data-based system representation for nonlinear systems. At the same time, nonlinear systems constitute the majority of practical systems. Moreover, they pose additional challenges, such as data-based surrogate models that prevent system analysis and controller design by convex optimization. Therefore, a variety of data-driven control approaches has been developed with different required prior insights into the system to ensure a guaranteed inference. In this survey, we will discuss developments in the context of data-driven control for nonlinear systems. In particular, we will focus on methods based on system representations providing guarantees from finite data, while the analysis and the controller design boil down to convex optimization problems given as semidefinite programs. Thus, these approaches achieve reasonable advances compared to state-of-the-art system analysis and controller design based on models from system identification. Specifically, the paper covers system representations based on extensions of Willems' fundamental lemma, set membership, kernel techniques, the Koopman operator, and feedback linearization.

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Data-driven control, Data-driven system analysis, Nonlinear systems, Semidefinite programming
National Category
Control Engineering
Identifiers
urn:nbn:se:uu:diva-521220 (URN), 10.1016/j.arcontrol.2023.100911 (DOI), 001138794700001 ()
Available from: 2024-01-22. Created: 2024-01-22. Last updated: 2024-01-22. Bibliographically approved
Gustafsson, F. K., Danelljan, M. & Schön, T. B. (2023). How Reliable is Your Regression Model’s Uncertainty Under Real-World Distribution Shifts?. Transactions on Machine Learning Research
How Reliable is Your Regression Model’s Uncertainty Under Real-World Distribution Shifts?
2023 (English). In: Transactions on Machine Learning Research, E-ISSN 2835-8856. Article in journal (Refereed), Published
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:uu:diva-513724 (URN)
Available from: 2023-10-10. Created: 2023-10-10. Last updated: 2024-01-08. Bibliographically approved
Henningsson, A., Wills, A. G., Hall, S. A., Hendriks, J., Wright, J. P., Schön, T. B. & Poulsen, H. F. (2023). Inferring the probability distribution over strain tensors in polycrystals from diffraction based measurements. Computer Methods in Applied Mechanics and Engineering, 417(Part A), Article ID 116417.
Inferring the probability distribution over strain tensors in polycrystals from diffraction based measurements
2023 (English). In: Computer Methods in Applied Mechanics and Engineering, ISSN 0045-7825, E-ISSN 1879-2138, Vol. 417, no Part A, article id 116417. Article in journal (Refereed), Published
Abstract [en]

Polycrystals illuminated by high-energy X-rays or neutrons produce diffraction patterns in which the measured diffraction peaks encode the individual single crystal strain states. While state of the art X-ray and neutron diffraction approaches can be used to routinely recover per grain mean strain tensors, less work has been produced on the recovery of higher order statistics of the strain distributions across the individual grains. In the setting of small deformations, we consider the problem of estimating the crystal elastic strain tensor probability distribution from diffraction data. For the special case of multivariate Gaussian strain tensor probability distributions, we show that while the mean of the distribution is well defined from measurements, the covariance of strain has a null-space. We show that there exist exactly 6 orthogonal perturbations to this covariance matrix under which the measured strain signal is invariant. In particular, we provide analytical parametrisations of these perturbations together with the set of possible maximum-likelihood estimates for a multivariate Gaussian fit to data. The parametric description of the null-space provides insights into the strain PDF modes that cannot be accurately estimated from the diffraction data. Understanding these modes prevents erroneous conclusions from being drawn based on the data. Beyond Gaussian strain tensor probability densities, we derive an iterative radial basis regression scheme in which the strain tensor probability density is estimated by a sparse finite basis expansion. This is made possible by showing that the operator mapping the strain tensor probability density onto the measured histograms of directional strain is linear, without approximation. The utility of the proposed algorithm is demonstrated by numerical simulations in the setting of single crystal monochromatic X-ray scattering. The proposed regression methods were found to robustly reject outliers and accurately predict the strain tensor probability distributions in the presence of Gaussian measurement noise.
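
The linearity claim in the abstract can be made concrete with the standard diffraction relation (a textbook identity, not quoted from the paper): a reflection measured along a unit scattering direction $\hat{\kappa}$ probes the directional strain $\epsilon(\hat{\kappa}) = \hat{\kappa}^\top \boldsymbol{\varepsilon}\, \hat{\kappa}$, so if the strain tensor has probability density $p(\boldsymbol{\varepsilon})$, the induced density of directional strain values is the pushforward

q_{\hat{\kappa}}(s) = \int \delta\big(s - \hat{\kappa}^\top \boldsymbol{\varepsilon}\, \hat{\kappa}\big)\, p(\boldsymbol{\varepsilon})\, d\boldsymbol{\varepsilon},

which is linear in $p$ even though the map from $\boldsymbol{\varepsilon}$ to $s$ is quadratic in $\hat{\kappa}$.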

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Diffraction, Estimation, Strain tensor, X-rays, Polycrystals, Probability distributions
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:uu:diva-515588 (URN), 10.1016/j.cma.2023.116417 (DOI), 001080509400001 ()
Funder
Swedish Research Council, 2017-06719
Available from: 2023-11-08. Created: 2023-11-08. Last updated: 2023-11-08. Bibliographically approved
Gedon, D., Ribeiro, A. H., Wahlström, N. & Schön, T. B. (2023). Invertible Kernel PCA With Random Fourier Features. IEEE Signal Processing Letters, 30, 563-567
Invertible Kernel PCA With Random Fourier Features
2023 (English). In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 30, p. 563-567. Article in journal (Refereed), Published
Abstract [en]

Kernel principal component analysis (kPCA) is a widely studied method to construct a low-dimensional data representation after a nonlinear transformation. The prevailing method to reconstruct the original input signal from kPCA, an important task for denoising, requires us to solve a supervised learning problem. In this paper, we present an alternative method where the reconstruction follows naturally from the compression step. We first approximate the kernel with random Fourier features. Then, we exploit the fact that the nonlinear transformation is invertible in a certain subdomain. Hence, the name invertible kernel PCA (ikPCA). We experiment with different data modalities and show that ikPCA performs similarly to kPCA with supervised reconstruction on denoising tasks, making it a strong alternative.
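
As a rough, hypothetical sketch of the ingredients named above (random Fourier features approximating an RBF kernel, followed by ordinary PCA), and not the authors' implementation: the data, lengthscale, feature dimension, and the naive least-squares inversion below are illustrative choices.

# Hypothetical sketch: random Fourier features (RFF) approximating an RBF
# kernel, followed by PCA via an SVD; not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
n, d, D, k, ell = 200, 5, 300, 10, 1.0        # samples, input dim, RFF dim, principal components, lengthscale

X = rng.normal(size=(n, d))                   # placeholder data
W = rng.normal(scale=1.0 / ell, size=(D, d))  # spectral samples of the RBF kernel
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def rff(X):
    # z(x) = sqrt(2/D) * cos(W x + b), so that z(x) . z(y) ~= exp(-||x - y||^2 / (2 ell^2))
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

Z = rff(X)
Z_mean = Z.mean(axis=0)
U, S, Vt = np.linalg.svd(Z - Z_mean, full_matrices=False)   # PCA via SVD
codes = (Z - Z_mean) @ Vt[:k].T                             # low-dimensional representation

# Naive reconstruction sketch: map the codes back to feature space, undo the
# cosine on its principal branch, and solve W x + b ~= theta by least squares.
# This only makes sense where W x + b stays inside a region on which the
# cosine is invertible, which is the restriction the abstract alludes to.
Z_hat = codes @ Vt[:k] + Z_mean
theta = np.arccos(np.clip(Z_hat / np.sqrt(2.0 / D), -1.0, 1.0))
X_hat = np.linalg.lstsq(W, (theta - b).T, rcond=None)[0].T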

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Principal component analysis, Kernel, Image reconstruction, Dimensionality reduction, Noise reduction, Electrocardiography, Toy manufacturing industry, Denoising, ECG, kernel PCA, pre-image, random Fourier features, reconstruction
National Category
Signal Processing
Identifiers
urn:nbn:se:uu:diva-507434 (URN), 10.1109/LSP.2023.3275499 (DOI), 001010346600002 ()
Funder
Knut and Alice Wallenberg Foundation; Swedish Research Council, 202104321
Available from: 2023-07-11. Created: 2023-07-11. Last updated: 2024-04-07. Bibliographically approved
Conde, M. V., Luo, Z., Gustafsson, F. K., Zhao, Z., Sjölund, J., Schön, T. B. & Niu, J. (2023). Lens-to-Lens Bokeh Effect Transformation: NTIRE 2023 Challenge Report. In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Paper presented at 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, 17-24 June 2023 (pp. 1643-1659). Vancouver: Institute of Electrical and Electronics Engineers (IEEE)
Lens-to-Lens Bokeh Effect Transformation: NTIRE 2023 Challenge Report
2023 (English). In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Vancouver: Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1643-1659. Conference paper, Published paper (Other academic)
Abstract [en]

We present the new Bokeh Effect Transformation Dataset (BETD), and review the proposed solutions for this novel task at the NTIRE 2023 Bokeh Effect Transformation Challenge. Recent advancements in mobile photography aim to reach the visual quality of full-frame cameras. Now, a goal in computational photography is to optimize the Bokeh effect itself, which is the aesthetic quality of the blur in out-of-focus areas of an image. Photographers create this aesthetic effect by benefiting from the lens optical properties. The aim of this work is to design a neural network capable of converting the Bokeh effect of one lens to the effect of another lens without harming the sharp foreground regions in the image. For a given input image, knowing the target lens type, we render or transform the Bokeh effect according to the lens properties. We build the BETD using two full-frame Sony cameras and diverse lens setups. To the best of our knowledge, this is the first attempt to solve this novel task, and we provide the first BETD dataset and benchmark for it. The challenge had 99 registered participants. The submitted methods gauge the state-of-the-art in Bokeh effect rendering and transformation.

Place, publisher, year, edition, pages
Vancouver: Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Signal Processing
Identifiers
urn:nbn:se:uu:diva-517642 (URN), 10.1109/CVPRW59228.2023.00166 (DOI)
Conference
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, 17-24 June 2023
Available from: 2023-12-11. Created: 2023-12-11. Last updated: 2023-12-14. Bibliographically approved
Wullt, B., Mattsson, P., Schön, T. B. & Norrlöf, M. (2023). Neural motion planning in dynamic environments. In: IFAC-PapersOnLine. Paper presented at IFAC World Congress (pp. 10126-10131). Elsevier
Neural motion planning in dynamic environments
2023 (English). In: IFAC-PapersOnLine, Elsevier, 2023, p. 10126-10131. Conference paper, Published paper (Refereed)
Abstract [en]

Motion planning is a mature field within robotics with many successful solutions. Despite this, current state-of-the-art planners are still computationally heavy. To address this, recent work has employed ideas from machine learning, which have drastically reduced the computational cost once a planner has been trained. It is mainly static environments that have been studied in this way. We continue along the same research direction but expand the problem to include dynamic environments, hence increasing the difficulty of the problem. Analogously to previous work, we use imitation learning, where a planning policy is learnt from an expert planner in a supervised manner. Our main contribution is a planner mimicking an expert that considers the future movement of all the obstacles in the environment, which is key in order to learn a successful policy in dynamic environments. We illustrate this by evaluating our approach in a dynamic environment and by comparing our planner with a conventional planner that re-plans at every iteration, which is a common approach in dynamic motion planning. We observe that our approach yields a higher success rate, while also taking less time and accumulating less distance to reach the goal.

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Data-driven control, Learning for control, Robot manipulators, Motion planning, Imitation learning
National Category
Control Engineering
Identifiers
urn:nbn:se:uu:diva-518375 (URN), 10.1016/j.ifacol.2023.10.885 (DOI)
Conference
IFAC World Congress
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2023-12-18. Created: 2023-12-18. Last updated: 2023-12-24. Bibliographically approved
Wang, L., Luo, Z., Gustafsson, F. K., Zhao, Z., Sjölund, J., Schön, T. B. & Zhang, W. (2023). NTIRE 2023 Challenge on Stereo Image Super-Resolution: Methods and Results. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Paper presented at 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, Canada, 17-24 June, 2023 (pp. 1346-1372). Vancouver: Institute of Electrical and Electronics Engineers (IEEE)
NTIRE 2023 Challenge on Stereo Image Super-Resolution: Methods and Results
2023 (English). In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver: Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1346-1372. Conference paper, Published paper (Other academic)
Abstract [en]

In this paper, we summarize the 2nd NTIRE challenge on stereo image super-resolution (SR) with a focus on new solutions and results. The task of the challenge is to super-resolve a low-resolution stereo image pair to a high-resolution one with a magnification factor of x4. Compared with single image SR, the major challenge of this challenge lies in how to exploit additional information in another viewpoint and how to maintain stereo consistency in the results. This challenge has 3 tracks, including one track on distortion (e.g., PSNR) and bicubic degradation, one track on perceptual quality (e.g., LPIPS) and bicubic degradation, as well as another track on real degradations. In total, 175, 93, and 103 participants were successfully registered for each track, respectively. In the test phase, 21, 17, and 12 teams successfully submitted results with PSNR (RGB) scores better than the baseline. This challenge establishes a new benchmark for stereo image SR.

Place, publisher, year, edition, pages
Vancouver: Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE Computer Society Conference on Computer Vision and Pattern Recognition workshops : proceedings, E-ISSN 2160-7516
Keywords
stereo image super-resolution
National Category
Signal Processing
Identifiers
urn:nbn:se:uu:diva-517639 (URN), 10.1109/CVPRW59228.2023.00141 (DOI), 979-8-3503-0249-3 (ISBN), 979-8-3503-0250-9 (ISBN)
Conference
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, Canada, 17-24 June, 2023
Available from: 2023-12-11. Created: 2023-12-11. Last updated: 2023-12-13. Bibliographically approved
Ancuti, C. O., Luo, Z., Gustafsson, F. K., Zhao, Z., Sjölund, J., Schön, T. B. & Busch, C. (2023). NTIRE 2023 HR NonHomogeneous Dehazing Challenge Report. In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Paper presented at IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Vancouver, BC, 17-24 June, 2023. Vancouver: Institute of Electrical and Electronics Engineers (IEEE)
NTIRE 2023 HR NonHomogeneous Dehazing Challenge Report
2023 (English). In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver: Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

This study assesses the outcomes of the NTIRE 2023 Challenge on Non-Homogeneous Dehazing, wherein novel techniques were proposed and evaluated on a new image dataset called HD-NH-HAZE. The HD-NH-HAZE dataset contains 50 high resolution pairs of real-life outdoor images featuring nonhomogeneous hazy images and corresponding haze-free images of the same scene. The nonhomogeneous haze was simulated using a professional setup that replicated real-world conditions of hazy scenarios. The competition had 246 participants and 17 teams that competed in the final testing phase, and the proposed solutions demonstrated the cutting edge in image dehazing technology.

Place, publisher, year, edition, pages
Vancouver: Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Signal Processing
Identifiers
urn:nbn:se:uu:diva-517647 (URN), 10.1109/CVPRW59228.2023.00180 (DOI)
Conference
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Vancouver, BC, 17-24 June, 2023
Available from: 2023-12-11. Created: 2023-12-11. Last updated: 2023-12-14. Bibliographically approved
Vasluianu, F.-A., Luo, Z., Gustafsson, F. K., Zhao, Z., Sjölund, J., Schön, T. B. & Xia, S. (2023). NTIRE 2023 Image Shadow Removal Challenge Report. In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Paper presented at 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, 17-24 June, 2023 (pp. 1788-1807). Vancouver: Institute of Electrical and Electronics Engineers (IEEE)
NTIRE 2023 Image Shadow Removal Challenge Report
2023 (English). In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver: Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1788-1807. Conference paper, Published paper (Other academic)
Abstract [en]

This work reviews the results of the NTIRE 2023 Challenge on Image Shadow Removal. The described set of solutions was proposed for a novel dataset, which captures a wide range of object-light interactions. It consists of 1200 roughly pixel-aligned pairs of real shadow free and shadow affected images, captured in a controlled environment. The data was captured in a white-box setup, using professional equipment for lights and data acquisition. The challenge had 144 registered participants, of which 19 teams were compared in the final ranking. The proposed solutions extend the work on shadow removal, improving over well-established state-of-the-art methods.

Place, publisher, year, edition, pages
Vancouver: Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Signal Processing
Identifiers
urn:nbn:se:uu:diva-517646 (URN), 10.1109/CVPRW59228.2023.00179 (DOI)
Conference
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, 17-24 June, 2023
Available from: 2023-12-11. Created: 2023-12-11. Last updated: 2023-12-14. Bibliographically approved
Projects
Probabilistic modeling of dynamical systems [2013-05524_VR]; Uppsala University. Learning flexible models of nonlinear dynamics [2017-03807_VR]; Uppsala University
Identifiers
ORCID iD: orcid.org/0000-0001-5183-234X
