Uppsala University Publications
Nyström, Ingela
Publications (10 of 93)
Nedelcu, R., Olsson, P., Nyström, I., Rydén, J. & Thor, A. (2018). Accuracy and precision of 3 intraoral scanners and accuracy of conventional impressions: A novel in vivo analysis method. Journal of Dentistry, 69, 110-118
2018 (English). In: Journal of Dentistry, ISSN 0300-5712, E-ISSN 1879-176X, Vol. 69, p. 110-118. Article in journal (Refereed). Published.
Abstract [en]

Objective: To evaluate a novel methodology using industrial scanners as a reference, and assess in vivo accuracy of 3 intraoral scanners (IOS) and conventional impressions. Further, to evaluate IOS precision in vivo.

Methods: Four reference bodies were bonded to the buccal surfaces of upper premolars and incisors in five subjects. After three reference scans with an industrial scanner, ATOS Core 80 (ATOS), the subjects were scanned three times with three IOS systems: 3M True Definition (3M), CEREC Omnicam (OMNI) and Trios 3 (TRIOS). One conventional impression (IMPR) was taken with 3M Impregum Penta Soft, and the poured models were digitized with the laboratory scanner 3shape D1000 (D1000). Best-fit alignment of the reference bodies and a 3D compare analysis were performed. Precision of ATOS and D1000 was assessed for quantitative evaluation and comparison. Accuracy of IOS and IMPR was analyzed using ATOS as the reference. Precision of IOS was evaluated through intra-system comparison.

Results: Precision of the ATOS reference scanner (mean 0.6 μm) and the D1000 (mean 0.5 μm) was high. Pairwise multiple comparisons of reference bodies located in different tooth positions showed a statistically significant difference in accuracy between two scanner groups, with 3M and TRIOS more accurate than OMNI (p-value range 0.0001 to 0.0006). IMPR did not show any statistically significant difference from IOS, and deviations of IOS and IMPR were of a similar magnitude. No statistically significant difference was found in IOS precision.

Conclusion: The methodology can be used for assessing accuracy of IOS and IMPR in vivo in up to five units bilaterally from midline. 3M and TRIOS had a higher accuracy than OMNI. IMPR overlapped both groups. Clinical significance: Intraoral scanners can be used as a replacement for conventional impressions when restoring up to ten units without extended edentulous spans.
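The analysis rests on a rigid best-fit alignment of the reference bodies followed by a deviation ("3D compare") measurement. As a minimal sketch of that idea, assuming corresponding points have already been sampled on the reference bodies in both scans (the point arrays and noise level below are hypothetical, and this is not the authors' actual software pipeline):

```python
import numpy as np

def best_fit_transform(P, Q):
    """Rotation R and translation t that best map point set P onto Q (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)     # centroids
    H = (P - cP).T @ (Q - cQ)                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical corresponding points sampled on the four reference bodies
ios_pts = np.random.rand(200, 3)                                  # test scan (IOS)
ref_pts = ios_pts + np.random.normal(0.0, 0.01, ios_pts.shape)    # reference scan (ATOS)

R, t = best_fit_transform(ios_pts, ref_pts)
aligned = ios_pts @ R.T + t
deviations = np.linalg.norm(aligned - ref_pts, axis=1)            # per-point "3D compare"
print(f"mean deviation: {deviations.mean():.4f} (arbitrary units)")
```

The sketch uses the Kabsch least-squares rotation; a real comparison would operate on full surface meshes rather than random points.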

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Digital impression, Intraoral scanner, Polyether impression, Accuracy, Precision, In vivo
National Category
Dentistry
Identifiers
urn:nbn:se:uu:diva-349829 (URN); 10.1016/j.jdent.2017.12.006 (DOI); 000425888000014 (ISI); 29246490 (PubMedID)
Nedelcu, R., Olsson, P., Nyström, I. & Thor, A. (2018). Finish line distinctness and accuracy in 7 intraoral scanners versus conventional impression: an in vitro descriptive comparison. BMC Oral Health, 18, Article ID 27.
2018 (English). In: BMC Oral Health, ISSN 1472-6831, E-ISSN 1472-6831, Vol. 18, article id 27. Article in journal (Refereed). Published.
National Category
Surgery; Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-348983 (URN); 10.1186/s12903-018-0489-3 (DOI); 000426323900001 (ISI); 29471825 (PubMedID)
Blache, L., Nysjö, F., Malmberg, F., Thor, A., Rodriguez-Lorenzo, A. & Nyström, I. (2018). SoftCut: A Virtual Planning Tool for Soft Tissue Resection on CT Images. In: Mark Nixon, Sasan Mahmoodi, and Reyer Zwiggelaar (Eds.), Medical Image Understanding and Analysis. Paper presented at 22nd Medical Image Understanding and Analysis (MIUA) 2018 (pp. 299-310). Cham: Springer, Vol. 894
2018 (English). In: Medical Image Understanding and Analysis / [ed] Mark Nixon, Sasan Mahmoodi, and Reyer Zwiggelaar, Cham: Springer, 2018, Vol. 894, p. 299-310. Conference paper, Published paper (Refereed).
Abstract [en]

With the increasing use of three-dimensional (3D) models and Computer Aided Design (CAD) in the medical domain, virtual surgical planning is now frequently used. Most current solutions focus on bone surgical operations. However, for head and neck oncologic resection, soft tissue ablation and reconstruction are common operations. In this paper, we propose a method to provide a fast and efficient estimation of the shape and dimensions of soft tissue resections. Our approach takes advantage of a simple sketch-based interface which allows the user to paint the contour of the resection on a patient-specific 3D model reconstructed from a computed tomography (CT) scan. The volume is then virtually cut and carved following this pattern. From the outline of the resection defined on the skin surface as a closed curve, we can identify which areas of the skin are inside or outside this shape. We then use distance transforms to identify the soft tissue voxels that are closer to the inside of this shape than to the outside. Thus, we can propagate the shape of the resection inside the soft tissue layers of the volume. We demonstrate the usefulness of the method on patient-specific CT data.
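As a minimal sketch of the distance-transform step described above, assuming the skin voxels have already been labelled as lying inside or outside the painted outline (the toy masks below are placeholders, not the SoftCut implementation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

shape = (64, 64, 64)
soft_tissue = np.zeros(shape, dtype=bool)     # segmented soft tissue
skin_inside = np.zeros(shape, dtype=bool)     # skin voxels enclosed by the painted outline
skin_outside = np.zeros(shape, dtype=bool)    # remaining skin voxels

# Toy geometry standing in for the patient-specific CT segmentation:
# the "skin" is one slice, and the outline encloses its central square.
soft_tissue[:32, :, :] = True
skin_inside[32, 20:44, 20:44] = True
skin_outside[32] = ~skin_inside[32]

# distance_transform_edt gives, per voxel, the distance to the nearest zero
# voxel, so inverting a mask yields the distance to that mask.
dist_inside = distance_transform_edt(~skin_inside)
dist_outside = distance_transform_edt(~skin_outside)

# Propagate the outline into the volume: a soft-tissue voxel is resected if it
# is closer to the enclosed skin patch than to the surrounding skin.
resection = soft_tissue & (dist_inside < dist_outside)
print(resection.sum(), "voxels marked for resection")
```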

Place, publisher, year, edition, pages
Cham: Springer, 2018
Series
Communications in Computer and Information Science ; 894
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-364351 (URN); 10.1007/978-3-319-95921-4_28 (DOI); 978-3-319-95920-7 (ISBN)
Conference
22nd Medical Image Understanding and Analysis (MIUA) 2018
Nyström, I., Nysjö, J., Thor, A. & Malmberg, F. (2017). BoneSplit – A 3D painting tool for interactive bone segmentation in CT images. In: Pattern Recognition and Information Processing: PRIP 2016. Paper presented at PRIP 2016, October 3–5, Minsk, Belarus (pp. 3-13). Springer
2017 (English). In: Pattern Recognition and Information Processing: PRIP 2016, Springer, 2017, p. 3-13. Conference paper, Published paper (Refereed).
Place, publisher, year, edition, pages
Springer, 2017
Series
Communications in Computer and Information Science ; 673
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-317762 (URN); 10.1007/978-3-319-54220-1_1 (DOI); 978-3-319-54219-5 (ISBN)
Conference
PRIP 2016, October 3–5, Minsk, Belarus
Beltrán-Castañón, C., Nyström, I. & Famili, F. (Eds.). (2017). Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. Paper presented at CIARP 2016, November 8–11, Lima, Peru. Springer
2017 (English). Conference proceedings (editor) (Refereed).
Place, publisher, year, edition, pages
Springer, 2017
Series
Lecture Notes in Computer Science ; 10125
National Category
Computer and Information Sciences
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-317758 (URN); 10.1007/978-3-319-52277-7 (DOI); 978-3-319-52276-0 (ISBN)
Conference
CIARP 2016, November 8–11, Lima, Peru
Svensson, L., Svensson, S., Nyström, I., Nysjö, F., Nysjö, J., Laloeuf, A., . . . Sintorn, I.-M. (2017). ProViz: a tool for explorative 3-D visualization and template matching in electron tomograms. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 5(6), 446-454
2017 (English). In: Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, ISSN 2168-1163, Vol. 5, no 6, p. 446-454. Article in journal (Refereed). Published.
Abstract [en]

Visual understanding is a key aspect when studying electron tomography datasets, alongside quantitative assessments such as registration of high-resolution structures. We here present the free software tool ProViz (Protein Visualization) for visualisation and template matching in electron tomograms of biological samples. The ProViz software contains methods and tools which we have developed, adapted and computationally optimised for easy and intuitive visualisation and analysis of electron tomograms with low signal-to-noise ratio. ProViz complements existing software in the application field and serves as an easy and convenient tool for a first assessment and screening of the tomograms. It provides enhancements in three areas: (1) improved visualisation that makes connections, as well as intensity differences between and within objects or structures, easier to see and interpret; (2) interactive transfer function editing with direct visual feedback, using both piecewise linear functions and Gaussian function elements; (3) computationally optimised template matching and tools to visually assess and interactively explore the correlation results. The visualisation capabilities and features of ProViz are demonstrated on various biological volume datasets: bacterial filament structures in vitro, a desmosome and the transmembrane cadherin connections therein in situ, and liposomes filled with doxorubicin in solution. The explorative template matching is demonstrated on a synthetic IgG dataset.
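A small sketch of the kind of transfer function editing mentioned in point (2), combining a piecewise linear opacity curve with additive Gaussian elements; the control points and Gaussian parameters below are arbitrary illustrations, not values or an API from ProViz:

```python
import numpy as np

def transfer_function(intensity, ctrl_x, ctrl_y, gaussians):
    """Opacity per intensity: a piecewise linear base plus Gaussian bumps."""
    opacity = np.interp(intensity, ctrl_x, ctrl_y)        # piecewise linear part
    for center, width, height in gaussians:               # Gaussian elements
        opacity += height * np.exp(-0.5 * ((intensity - center) / width) ** 2)
    return np.clip(opacity, 0.0, 1.0)

intensities = np.linspace(0.0, 1.0, 256)                  # normalized voxel values
opacity = transfer_function(
    intensities,
    ctrl_x=[0.0, 0.3, 1.0], ctrl_y=[0.0, 0.0, 0.6],       # suppress low intensities
    gaussians=[(0.55, 0.05, 0.8)],                        # highlight one intensity band
)
```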

Keywords
Electron tomography, direct volume rendering, image registration, connected component filtering, visualisation and analysis software
National Category
Bioinformatics (Computational Biology)
Identifiers
urn:nbn:se:uu:diva-359635 (URN); 10.1080/21681163.2016.1154483 (DOI); 000428130400009 (ISI)
Funder
Swedish Foundation for Strategic Research; VINNOVA
Nysjö, F., Olsson, P., Malmberg, F., Carlbom, I. B. & Nyström, I. (2017). Using anti-aliased signed distance fields for generating surgical guides and plates from CT images. Journal of WSCG, 25(1), 11-20
2017 (English). In: Journal of WSCG, ISSN 1213-6972, E-ISSN 1213-6964, Vol. 25, no 1, p. 11-20. Article in journal (Refereed). Published.
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-335346 (URN)
Christersson, A., Nysjö, J., Berglund, L., Malmberg, F., Sintorn, I.-M., Nyström, I. & Larsson, S. (2016). Comparison of 2D radiography and a semi-automatic CT-based 3D method for measuring change in dorsal angulation over time in distal radius fractures. Skeletal Radiology, 45(6), 763-769
2016 (English). In: Skeletal Radiology, ISSN 0364-2348, E-ISSN 1432-2161, Vol. 45, no 6, p. 763-769. Article in journal (Refereed). Published.
Abstract [en]

Objective: The aim of the present study was to compare the reliability and agreement between a computed tomography-based method (CT) and digitised 2D radiographs (XR) when measuring change in dorsal angulation over time in distal radius fractures. Materials and methods: Radiographs from 33 distal radius fractures treated with external fixation were retrospectively analysed. All fractures had been examined using both XR and CT on six occasions over 6 months postoperatively. The changes in dorsal angulation between the first reference images and the following examinations in every patient were calculated from 133 follow-up measurements by two assessors, and the measurements were repeated at two different time points. The measurements were analysed using Bland-Altman plots, comparing intra- and inter-observer agreement within and between XR and CT. Results: The mean differences in intra- and inter-observer measurements for XR, CT, and between XR and CT were close to zero, implying equal validity. The average intra- and inter-observer limits of agreement for XR, CT, and between XR and CT were ±4.4°, ±1.9° and ±6.8°, respectively. Conclusions: For scientific purposes, the reliability of XR seems unacceptably low when measuring changes in dorsal angulation in distal radius fractures, whereas the reliability of the semi-automatic CT-based method was higher and is therefore preferable when a more precise method is required.
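As a brief illustration of the Bland-Altman quantities reported above, a sketch that computes the bias and limits of agreement from hypothetical paired measurements, using the conventional mean difference ± 1.96 SD definition (the paper's exact procedure is not reproduced here):

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and lower/upper 95% limits of agreement."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                      # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical changes in dorsal angulation (degrees) from two observers
obs1 = [4.1, 7.3, 2.8, 10.2, 5.5, 6.9]
obs2 = [4.6, 6.8, 3.1, 10.9, 5.0, 7.4]
bias, lo, hi = bland_altman(obs1, obs2)
print(f"bias {bias:.2f} deg, limits of agreement [{lo:.2f}, {hi:.2f}] deg")
```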

National Category
Orthopaedics; Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-297776 (URN); 10.1007/s00256-016-2350-6 (DOI); 000374476200003 (ISI); 26922189 (PubMedID)
Paetzel, M., Peters, C., Nyström, I. & Castellano, G. (2016). Congruency Matters – How ambiguous gender cues increase a robot’s uncanniness. In: Social Robotics. Paper presented at ICSR 2016, November 1–3, Kansas City, MO (pp. 402-412). Springer
2016 (English). In: Social Robotics, Springer, 2016, p. 402-412. Conference paper, Published paper (Refereed).
Abstract [en]

Most research on the uncanny valley effect is concerned with the influence of human-likeness and realism as triggers of an uncanny feeling in humans. There has been a lack of investigation into the effect of other dimensions, for example, gender. Back-projected robotic heads allow us to alter visual cues in the appearance of the robot in order to investigate how its perception changes. In this paper, we study the influence of gender on perceived uncanniness. We conducted an experiment with 48 participants in which we used different modalities of interaction to change the strength of the gender cues in the robot. Results show that incongruence in the gender cues of the robot, and not its specific gender, influences the uncanniness of the back-projected robotic head. This finding has potential implications for both the perceptual mismatch and the categorization ambiguity theories as general explanations of the uncanny valley effect.

Place, publisher, year, edition, pages
Springer, 2016
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 9979
National Category
Robotics
Identifiers
urn:nbn:se:uu:diva-308416 (URN); 10.1007/978-3-319-47437-3_39 (DOI); 000389816500039 (ISI); 978-3-319-47436-6 (ISBN); 978-3-319-47437-3 (ISBN)
Conference
ICSR 2016, November 1–3, Kansas City, MO
Paetzel, M., Peters, C., Nyström, I. & Castellano, G. (2016). Effects of multimodal cues on children's perception of uncanniness in a social robot. In: Proc. 18th ACM International Conference on Multimodal Interaction. Paper presented at ICMI 2016, November 12–16, Tokyo, Japan (pp. 297-301).
2016 (English). In: Proc. 18th ACM International Conference on Multimodal Interaction, 2016, p. 297-301. Conference paper, Published paper (Refereed).
Abstract [en]

This paper investigates the influence of multimodal incongruent gender cues on children's perception of a robot's uncanniness and gender. The back-projected robot head Furhat was equipped with a female and a male face texture and voice synthesizer, and the voice and facial cues were tested in congruent and incongruent combinations. 106 children between the ages of 8 and 13 participated in the study. Results show that multimodal incongruent cues do not trigger a feeling of uncanniness in children. These results are significant, as they support other recent research showing that the perception of uncanniness cannot be triggered by a categorical ambiguity in the robot. In addition, we found that children rely on auditory cues much more strongly than on facial cues when assigning a gender to the robot if presented with incongruent cues. These findings have implications for robot design, as it seems possible to change the gender of a robot by changing only its voice, without creating a feeling of uncanniness in a child.

Keywords
Uncanny valley, child-robot interaction, multimodal voice and facial expressions
National Category
Robotics
Identifiers
urn:nbn:se:uu:diva-308414 (URN); 10.1145/2993148.2993157 (DOI); 000390299900046 (ISI); 9781450345569 (ISBN)
Conference
ICMI 2016, November 12–16, Tokyo, Japan