uu.se Uppsala University Publications
Malmberg, Filip
Publications (10 of 59)
Sandberg Melin, C., Malmberg, F. & Söderberg, P. G. (2019). A strategy for OCT estimation of the optic nerve head pigment epithelium central limit-inner limit of the retina minimal distance, PIMD-2π. Acta Ophthalmologica, 97(2), 208-213
A strategy for OCT estimation of the optic nerve head pigment epithelium central limit-inner limit of the retina minimal distance, PIMD-2π
2019 (English) In: Acta Ophthalmologica, ISSN 1755-375X, E-ISSN 1755-3768, Vol. 97, no 2, p. 208-213. Article in journal (Refereed). Published
Abstract [en]

Purpose: To develop a semi-automatic algorithm for estimating the pigment epithelium central limit-inner limit of the retina minimal distance averaged over 2π radians (PIMD-2π), and to estimate the precision of the algorithm. Further, the variance of PIMD-2π estimates was assessed in a pilot sample of glaucomatous eyes. Methods: Three-dimensional cubes of the optic nerve head (ONH) were captured with a commercial SD-OCT device. Raw cube data were exported for semi-automatic segmentation. The inner limit of the retina was detected automatically. Custom software aided the delineation of the ONH pigment epithelium central limit, resolved in 500 evenly distributed radii. Sources of variation in PIMD estimates were analysed with an analysis of variance. Results: The estimated variance for segmentations and angles was 130 µm² and 1280 µm², respectively. Averaging eight segmentations, a 95% confidence interval for mean PIMD-2π was estimated at 212 ± 10 µm (df = 7). The coefficient of variation for segmentation was estimated at 0.05. In the glaucomatous eyes, the within-subject variance for captured volumes and for segmentations within volumes was 10 µm² and 50 µm², respectively. Conclusion: The developed semi-automatic algorithm enables estimation of PIMD-2π in glaucomatous eyes with relevant precision using few segmentations of each captured volume.
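The reported interval can be reproduced from the quoted variance components. A minimal arithmetic sketch (illustrative only; the variable names and the t-quantile are standard-table assumptions, not taken from the paper):

```python
import math

seg_var = 130.0    # estimated segmentation variance, µm² (from the abstract)
n_seg = 8          # number of segmentations averaged
mean_pimd = 212.0  # estimated mean PIMD-2π, µm
t_975_df7 = 2.365  # two-sided 95% t-quantile for df = 7 (standard table value)

se = math.sqrt(seg_var / n_seg)      # standard error of the averaged estimate
half_width = t_975_df7 * se          # confidence-interval half-width
print(f"{mean_pimd:.0f} +/- {half_width:.0f} µm")  # prints "212 +/- 10 µm"
```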

National Category
Ophthalmology Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-362723 (URN), 10.1111/aos.13908 (DOI), 000459637900020 (), 30198106 (PubMedID)
Funder
Gun och Bertil Stohnes Stiftelse
Available from: 2018-09-10 Created: 2018-10-09 Last updated: 2020-02-20. Bibliographically approved.
Sjöholm, T., Ekström, S., Strand, R., Ahlström, H., Lind, L., Malmberg, F. & Kullberg, J. (2019). A whole-body FDG PET/MR atlas for multiparametric voxel-based analysis. Scientific Reports, 9, Article ID 6158.
A whole-body FDG PET/MR atlas for multiparametric voxel-based analysis
2019 (English) In: Scientific Reports, ISSN 2045-2322, E-ISSN 2045-2322, Vol. 9, article id 6158. Article in journal (Refereed). Published
Abstract [en]

Quantitative multiparametric imaging is a potential key application for Positron Emission Tomography/Magnetic Resonance (PET/MR) hybrid imaging. To enable objective and automatic voxel-based multiparametric analysis in whole-body applications, the purpose of this study was to develop a multimodality whole-body atlas of functional 18F-fluorodeoxyglucose (FDG) PET and anatomical fat-water MR data of adults. Image registration was used to transform PET/MR images of healthy control subjects into male and female reference spaces, producing a fat-water MR, local tissue volume and FDG PET whole-body normal atlas consisting of 12 male (66.6 ± 6.3 years) and 15 female (69.5 ± 3.6 years) subjects. Manual segmentations of tissues and organs in the male and female reference spaces confirmed that the atlas contained adequate physiological and anatomical values. The atlas was applied in two anomaly detection tasks as proof of concept. The first task automatically detected anomalies in two subjects with suspected malignant disease using FDG data. The second task successfully detected abnormal liver fat infiltration in one subject using fat fraction data.
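The voxel-based anomaly detection described in the abstract amounts to comparing a registered subject volume against the atlas mean and spread at every voxel. A toy illustration (the synthetic data, threshold, and array shapes are assumptions for the sketch, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-ins for atlas statistics; the real atlas holds registered
# whole-body FDG PET volumes from healthy control subjects.
atlas = rng.normal(2.0, 0.3, size=(15, 8, 8, 8))   # controls x voxel grid
subject = rng.normal(2.0, 0.3, size=(8, 8, 8))     # registered subject volume
subject[4, 4, 4] = 5.0                             # simulated focal uptake

mu = atlas.mean(axis=0)                 # per-voxel atlas mean
sigma = atlas.std(axis=0, ddof=1)       # per-voxel atlas spread
z = (subject - mu) / sigma              # voxel-wise z-score against the atlas
anomalies = np.argwhere(z > 4.0)        # flag strongly elevated voxels
```

The flagged coordinates include the simulated focal-uptake voxel; the z-threshold is an arbitrary choice for the sketch.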

National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-382934 (URN), 10.1038/s41598-019-42613-z (DOI), 000464652400029 (), 30992502 (PubMedID)
Available from: 2019-04-16 Created: 2019-05-07 Last updated: 2020-02-05. Bibliographically approved.
Pilia, M., Kullberg, J., Ahlström, H., Malmberg, F., Ekström, S. & Strand, R. (2019). Average volume reference space for large scale registration of whole-body magnetic resonance images. PLoS ONE, 14(10), Article ID e0222700.
Average volume reference space for large scale registration of whole-body magnetic resonance images
2019 (English) In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 14, no 10, article id e0222700. Article in journal (Refereed). Published
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-397325 (URN), 10.1371/journal.pone.0222700 (DOI)
Funder
Swedish Research Council, 2016-01040; Swedish Heart Lung Foundation, HLF 20170492
Available from: 2019-10-01 Created: 2019-11-19 Last updated: 2019-11-20. Bibliographically approved.
Ayyalasomayajula, K. R., Wilkinson, T., Malmberg, F. & Brun, A. (2019). CalligraphyNet: Augmenting handwriting generation with quill based stroke width. Paper presented at 26th IEEE International Conference on Image Processing.
CalligraphyNet: Augmenting handwriting generation with quill based stroke width
2019 (English) Manuscript (preprint) (Other academic)
Abstract [en]

Realistic handwritten document generation garners a lot of interest from the document research community for its ability to generate annotated data. In the current approach we have used GAN-based stroke width enrichment and style-transfer-based refinement over generated data, which results in realistic-looking handwritten document images. The GAN part of data augmentation transfers the stroke variation introduced by a writing instrument onto images rendered from trajectories created by tracking coordinates along the stylus movement. The coordinates from stylus movement are augmented with the learned stroke width variations during the data augmentation block. An RNN model is then trained to learn the variation along the movement of the stylus, together with the stroke variations corresponding to an input sequence of characters. This model is then used to generate images of words or sentences given an input character string. A document image thus created is used as a mask to transfer the style variations of the ink and the parchment. The generated image can capture the color content of the ink and parchment, useful for creating annotated data.

National Category
Computer Systems
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-379633 (URN)
Conference
26th IEEE International Conference on Image Processing
Note

Currently under review

Available from: 2019-03-19 Created: 2019-03-19 Last updated: 2019-04-08
Malmberg, F., Ciesielski, K. C. & Strand, R. (2019). Optimization of max-norm objective functions in image processing and computer vision. In: Discrete Geometry for Computer Imagery. Paper presented at DGCI 2019, March 26–28, Marne-la-Vallée, France (pp. 206-218). Springer
Optimization of max-norm objective functions in image processing and computer vision
2019 (English) In: Discrete Geometry for Computer Imagery, Springer, 2019, p. 206-218. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
Springer, 2019
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 11414
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-393368 (URN), 10.1007/978-3-030-14085-4_17 (DOI), 978-3-030-14084-7 (ISBN)
Conference
DGCI 2019, March 26–28, Marne-la-Vallée, France
Available from: 2019-02-23 Created: 2019-09-20 Last updated: 2019-09-20. Bibliographically approved.
Ayyalasomayajula, K. R., Malmberg, F. & Brun, A. (2019). PDNet: Semantic segmentation integrated with a primal-dual network for document binarization. Pattern Recognition Letters, 121, 52-60
PDNet: Semantic segmentation integrated with a primal-dual network for document binarization
2019 (English) In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 121, p. 52-60. Article in journal (Refereed). Published
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-366933 (URN), 10.1016/j.patrec.2018.05.011 (DOI), 000459876700008 ()
Funder
Swedish Research Council, 2012-5743; Riksbankens Jubileumsfond, NHS14-2068:1
Available from: 2018-05-16 Created: 2018-11-27 Last updated: 2019-04-04. Bibliographically approved.
Nysjö, F., Malmberg, F. & Nyström, I. (2019). RayCaching: Amortized Isosurface Rendering for Virtual Reality. Computer graphics forum (Print)
RayCaching: Amortized Isosurface Rendering for Virtual Reality
2019 (English) In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659. Article in journal (Refereed). Epub ahead of print
Abstract [en]

Real-time virtual reality requires efficient rendering methods to deal with high-resolution stereoscopic displays and low-latency head-tracking. Our proposed RayCaching method renders isosurfaces of large volume datasets by amortizing raycasting over several frames and caching primary rays as small bricks that can be efficiently rasterized. An occupancy map in the form of a clipmap provides level of detail and ensures that only bricks corresponding to visible points on the isosurface are being cached and rendered. Hard shadows and ambient occlusion from secondary rays are also accumulated and stored in the cache. Our method supports real-time isosurface rendering with dynamic isovalue and allows stereoscopic visualization and exploration of large volume datasets at framerates suitable for virtual reality applications.

Place, publisher, year, edition, pages
John Wiley & Sons, 2019
Keywords
ray tracing, visibility, point-based models, virtual reality
National Category
Computer Sciences
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-398397 (URN), 10.1111/cgf.13762 (DOI)
Available from: 2019-12-05 Created: 2019-12-05 Last updated: 2020-01-23. Bibliographically approved.
Zhang, J., Malmberg, F. & Sclaroff, S. (2019). Visual Saliency: From Pixel-Level to Object-Level Analysis. Springer
Visual Saliency: From Pixel-Level to Object-Level Analysis
2019 (English) Book (Refereed)
Place, publisher, year, edition, pages
Springer, 2019
National Category
Computer and Information Sciences
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-397326 (URN), 10.1007/978-3-030-04831-0 (DOI), 978-3-030-04830-3 (ISBN)
Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2019-11-20. Bibliographically approved.
Guglielmo, P., Sjöholm, T., Enblad, G., Strand, R., Kullberg, J., Malmberg, F. & Ahlström, H. (2018). Imiomics Using Whole-body FDG PET/MR in Staging and Treatment Response Evaluation of Non-Hodgkin Lymphoma Patients Treated With CAR-T Cells. Paper presented at 31st Annual Congress of the European-Association-of-Nuclear-Medicine (EANM), OCT 13-17, 2018, Dusseldorf, GERMANY. European Journal of Nuclear Medicine and Molecular Imaging, 45, S37-S38
Imiomics Using Whole-body FDG PET/MR in Staging and Treatment Response Evaluation of Non-Hodgkin Lymphoma Patients Treated With CAR-T Cells
2018 (English) In: European Journal of Nuclear Medicine and Molecular Imaging, ISSN 1619-7070, E-ISSN 1619-7089, Vol. 45, p. S37-S38. Article in journal, Meeting abstract (Other academic). Published
Place, publisher, year, edition, pages
Springer, 2018
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:uu:diva-372960 (URN), 000449266200052 ()
Conference
31st Annual Congress of the European-Association-of-Nuclear-Medicine (EANM), OCT 13-17, 2018, Dusseldorf, GERMANY
Available from: 2019-01-24 Created: 2019-01-24 Last updated: 2019-01-24. Bibliographically approved.
Blache, L., Nysjö, F., Malmberg, F., Thor, A., Rodriguez-Lorenzo, A. & Nyström, I. (2018). SoftCut: A Virtual Planning Tool for Soft Tissue Resection on CT Images. In: Mark Nixon; Sasan Mahmoodi; Reyer Zwiggelaar (Ed.), Medical Image Understanding and Analysis. Paper presented at 22nd Medical Image Understanding and Analysis (MIUA), Southampton, UK, 2018 (pp. 299-310). Cham: Springer, 894
SoftCut: A Virtual Planning Tool for Soft Tissue Resection on CT Images
Show others...
2018 (English) In: Medical Image Understanding and Analysis / [ed] Mark Nixon; Sasan Mahmoodi; Reyer Zwiggelaar, Cham: Springer, 2018, Vol. 894, p. 299-310. Conference paper, Published paper (Refereed)
Abstract [en]

With the increasing use of three-dimensional (3D) models and Computer Aided Design (CAD) in the medical domain, virtual surgical planning is now frequently used. Most current solutions focus on bone surgical operations. However, for head and neck oncologic resection, soft tissue ablation and reconstruction are common operations. In this paper, we propose a method to provide a fast and efficient estimation of the shape and dimensions of soft tissue resections. Our approach takes advantage of a simple sketch-based interface which allows the user to paint the contour of the resection on a patient-specific 3D model reconstructed from a computed tomography (CT) scan. The volume is then virtually cut and carved following this pattern. From the outline of the resection, defined on the skin surface as a closed curve, we can identify which areas of the skin are inside or outside this shape. We then use distance transforms to identify the soft tissue voxels that are closer to the inside of this shape. Thus, we can propagate the shape of the resection inside the soft tissue layers of the volume. We demonstrate the usefulness of the method on patient-specific CT data.
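The distance-transform step can be sketched in a toy 2D setting (an illustration under assumed masks, not the authors' implementation; SciPy's Euclidean distance transform stands in for whatever transform the paper uses):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy 2D stand-in: row 0 plays the skin surface, split into pixels inside
# and outside the painted resection outline.
tissue = np.ones((32, 32), bool)                 # soft-tissue mask
inside_skin = np.zeros((32, 32), bool)
outside_skin = np.zeros((32, 32), bool)
inside_skin[0, 10:22] = True                     # skin inside the outline
outside_skin[0, :10] = outside_skin[0, 22:] = True

# distance_transform_edt measures distance to the nearest zero (background),
# so invert each seed mask to get distance to the nearest seed pixel.
d_in = distance_transform_edt(~inside_skin)
d_out = distance_transform_edt(~outside_skin)

# Voxels closer to the inside of the outline than to the outside are
# propagated into the resection volume.
resection = tissue & (d_in < d_out)
```

Voxels directly beneath the painted region end up in `resection`, which mirrors how the outline is propagated through the soft-tissue layers.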

Place, publisher, year, edition, pages
Cham: Springer, 2018
Series
Communications in Computer and Information Science
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-364351 (URN), 10.1007/978-3-319-95921-4_28 (DOI), 978-3-319-95920-7 (ISBN)
Conference
22nd Medical Image Understanding and Analysis (MIUA), Southampton, UK, 2018
Available from: 2018-10-25 Created: 2018-10-25 Last updated: 2019-03-14. Bibliographically approved.