uu.se – Uppsala University Publications
751–792 of 792 hits
  • 751.
    Wieslander, Håkan
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Forslid, Gustav
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Deep Convolutional Neural Networks For Detecting Cellular Changes Due To Malignancy (2017). Independent thesis, advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    Discovering cancer at an early stage is an effective way to increase the chance of survival. However, since most screening processes are done manually, they are time-inefficient and thus costly. One way of automating the screening process could be to classify cells using Convolutional Neural Networks. Convolutional Neural Networks have been proven to produce high accuracy for image classification tasks. This thesis investigates if Convolutional Neural Networks can be used as a tool to detect cellular changes due to malignancy in the oral cavity and uterine cervix. Two datasets containing oral cells and two datasets containing cervical cells were used. The cells were divided into normal and abnormal cells for a binary classification. The performance was evaluated for two different network architectures, ResNet and VGG. For the oral datasets the accuracy varied between 78% and 82% correctly classified cells depending on the dataset and network. For the cervical datasets the accuracy varied between 84% and 86% correctly classified cells depending on the dataset and network. These results indicate a high potential for classifying abnormalities for oral and cervical cells. ResNet was shown to be the preferable network, with a higher accuracy and a smaller standard deviation.
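The thesis code is not reproduced in this listing, but the basic setup it describes, fine-tuning a pretrained CNN for binary normal/abnormal cell classification, can be sketched as follows. This is an illustrative sketch using torchvision's ResNet-18, not the authors' actual architecture, data, or training configuration; the dataset path and hyperparameters are placeholders.

```python
# Illustrative sketch (not the thesis code): fine-tune a pretrained ResNet
# for binary normal/abnormal cell classification.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Placeholder data layout: cell_images/{normal,abnormal}/*.png
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("cell_images", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: normal / abnormal

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                      # placeholder epoch count
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```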

  • 752.
    Wieslander, Håkan
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Forslid, Gustav
    Bengtsson, Ewert
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Wählby, Carolina
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Hirsch, Jan-Michaél
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för kirurgiska vetenskaper, Käkkirurgi.
    Runow Stark, Christina
    Sadanandan, Sajith Kecheril
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Deep convolutional neural networks for detecting cellular changes due to malignancy (2017). In: Proc. 16th International Conference on Computer Vision Workshops, IEEE Computer Society, 2017, pp. 82-89. Conference paper (refereed)
  • 753.
    Wilkinson, Tomas
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Learning based Word Search and Visualisation for Historical Manuscript Images (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Today, work with historical manuscripts is nearly exclusively done manually, by researchers in the humanities as well as laypeople mapping out their personal genealogy. This is a highly time-consuming endeavour, as it is not uncommon to spend months with the same volume of a few hundred pages. The last few decades have seen an ongoing effort to digitise manuscripts, both for preservation purposes and to increase accessibility. This has the added effect of enabling the use of methods and algorithms from Image Analysis and Machine Learning that have great potential in both making existing work more efficient and creating new methodologies for manuscript-based research.

    The first part of this thesis focuses on Word Spotting, the task of searching for a given text query in a manuscript collection. This can be broken down into two tasks, detecting where the words are located on the page, and then ranking the words according to their similarity to a search query. We propose Deep Learning models to do both, separately and then simultaneously, and successfully search through a large manuscript collection consisting of over a hundred thousand pages.

    A limiting factor in applying learning-based methods to historical manuscript images is the cost, and therefore the lack, of annotated data needed to train machine learning models. We propose several ways to mitigate this problem, including generating synthetic data, augmenting existing data to get better value from it, and learning from pre-existing, partially annotated data that was previously unusable.

    In the second part, a method for visualising manuscript collections called the Image-based Word Cloud is proposed. Much like its text-based counterpart, it arranges the most representative words in a collection into a cloud, where the size of each word is proportional to its frequency of occurrence. This grants a user a single-image overview of a manuscript collection, regardless of its size. We further propose a way to estimate a manuscript's production date. This can grant historians context that is crucial for correctly interpreting the contents of a manuscript.

    List of papers
    1. Bootstrapping Weakly Supervised Segmentation-free Word Spotting through HMM-based Alignment
    (English) Manuscript (preprint) (Other academic)
    Abstract [en]

    Recent work in word spotting in handwritten documents has yielded impressive results. Yet this progress has largely been made by supervised learning systems which are dependent on manually annotated data, making deployment to new collections a significant effort. In this paper we propose an approach utilising transcriptions without bounding box annotations to train segmentation-free word spotting models, given a model partially trained with full annotations. This is done through an alignment procedure based on hidden Markov models. This model can create a tentative mapping between word region proposals and the transcriptions to automatically create additional weakly annotated training data. Using as little as 1% and 10% of the fully annotated training sets for partial convergence, we automatically annotate the remaining training data and successfully train using it. Across all datasets, our approach comes within a few mAP% of achieving the same performance as a model trained with only full ground truth. We believe that this will be a significant advance towards a more general use of word spotting, since digital transcription data will already exist for parts of many collections of interest.

    Keywords
    weakly supervised, segmentation-free word spotting, convolutional neural network, hidden Markov model
    National subject category
    Computer Vision and Robotics (Autonomous Systems)
    Research subject
    Computerized Image Processing
    Identifiers
    urn:nbn:se:uu:diva-381304 (URN)
    Project
    q2b
    Research funder
    Vetenskapsrådet, 2012-5743; Riksbankens Jubileumsfond, NHS14-2068:1
    Available from: 2019-04-07 Created: 2019-04-07 Last updated: 2019-04-08
    2. Neural Word Search in Historical Manuscript Collections
    (English) Manuscript (preprint) (Other academic)
    Abstract [en]

    We address the problem of segmenting and retrieving word images in collections of historical manuscripts given a text query. This is commonly referred to as "word spotting". To this end, we first propose an end-to-end trainable model based on deep neural networks that we dub Ctrl-F-Net. The model simultaneously generates region proposals and embeds them into a word embedding space, wherein a search is performed. We further introduce a simplified version called Ctrl-F-Mini. It is faster with similar performance, though it is limited to more easily segmented manuscripts. We evaluate both models on common benchmark datasets and surpass the previous state of the art. Finally, in collaboration with historians, we employ the Ctrl-F-Net to search within a large manuscript collection of over 100 thousand pages, written across two centuries. With only 11 training pages, we enable large-scale data collection in manuscript-based historical research. This speeds up data collection and increases the number of manuscripts processed by orders of magnitude. Given the time-consuming manual work required to study old manuscripts in the humanities, quick and robust tools for word spotting have the potential to revolutionise domains like history, religion and language.

    Keywords
    Word spotting, Historical Manuscripts, Deep Convolutional Neural Network, Region Proposals
    National subject category
    Computer Vision and Robotics (Autonomous Systems)
    Research subject
    Computerized Image Processing
    Identifiers
    urn:nbn:se:uu:diva-381306 (URN)
    Project
    q2b
    Research funder
    Vetenskapsrådet, 2012-5743; Riksbankens Jubileumsfond, NHS14-2068:1
    Available from: 2019-04-07 Created: 2019-04-07 Last updated: 2019-04-08
    3. Neural Ctrl-F: Segmentation-free query-by-string word spotting in handwritten manuscript collections
    2017 (English). In: 2017 IEEE International Conference on Computer Vision (ICCV), IEEE, 2017, pp. 4443-4452. Conference paper, published paper (refereed)
    Abstract [en]

    In this paper, we approach the problem of segmentation-free query-by-string word spotting for handwritten documents. In other words, we use methods inspired by computer vision and machine learning to search for words in large collections of digitized manuscripts. In particular, we are interested in historical handwritten texts, which are often far more challenging than modern printed documents. This task is important, as it provides people with a way to quickly find what they are looking for in large collections that are tedious and difficult to read manually. To this end, we introduce an end-to-end trainable model based on deep neural networks that we call Ctrl-F-Net. Given a full manuscript page, the model simultaneously generates region proposals, and embeds these into a distributed word embedding space, where searches are performed. We evaluate the model on common benchmarks for handwritten word spotting, outperforming the previous state-of-the-art segmentation-free approaches by a large margin, and in some cases even segmentation-based approaches. One interesting real-life application of our approach is to help historians to find and count specific words in court records that are related to women's sustenance activities and division of labor. We provide promising preliminary experiments that validate our method on this task.

    Place, publisher, year, edition, pages
    IEEE, 2017
    Series
    IEEE International Conference on Computer Vision, E-ISSN 1550-5499
    Keywords
    Segmentation-free Word Spotting, Deep Learning, Convolutional Neural Network, Query-by-String
    National subject category
    Computer Vision and Robotics (Autonomous Systems)
    Research subject
    Computerized Image Processing
    Identifiers
    urn:nbn:se:uu:diva-335926 (URN); 10.1109/ICCV.2017.475 (DOI); 000425498404054 (); 978-1-5386-1032-9 (ISBN)
    Conference
    16th IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October 22-29, 2017
    Project
    q2b
    Research funder
    Vetenskapsrådet, 2012-5743; Riksbankens Jubileumsfond, NHS14-2068:1
    Available from: 2017-12-11 Created: 2017-12-11 Last updated: 2019-04-08. Bibliographically approved
    4. Visualizing document image collections using image-based word clouds
    2015 (English). In: Advances in Visual Computing: 11th International Symposium, ISVC 2015, Las Vegas, NV, USA, December 14-16, 2015, Proceedings, Part I / [ed] Bebis, G; Boyle, R; Parvin, B; Koracin, D; Pavlidis, I; Feris, R; McGraw, T; Elendt, M; Kopper, R; Ragan, E; Ye, Z; Weber, G, Springer, 2015, pp. 297-306. Conference paper, published paper (refereed)
    Place, publisher, year, edition, pages
    Springer, 2015
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 9474
    National subject category
    Computer Vision and Robotics (Autonomous Systems)
    Research subject
    Computerized Image Processing
    Identifiers
    urn:nbn:se:uu:diva-272193 (URN); 10.1007/978-3-319-27857-5_27 (DOI); 000376400300027 (); 9783319278568 (ISBN); 9783319278575 (ISBN)
    Conference
    ISVC 2015, December 14–16, Las Vegas, NV
    Project
    q2b
    Research funder
    Vetenskapsrådet, 2012-5743
    Available from: 2015-12-18 Created: 2016-01-12 Last updated: 2019-04-08. Bibliographically approved
    5. A novel word segmentation method based on object detection and deep learning
    2015 (English). In: Advances in Visual Computing: 11th International Symposium, ISVC 2015, Las Vegas, NV, USA, December 14-16, 2015, Proceedings, Part I / [ed] Bebis, G; Boyle, R; Parvin, B; Koracin, D; Pavlidis, I; Feris, R; McGraw, T; Elendt, M; Kopper, R; Ragan, E; Ye, Z; Weber, G, Springer, 2015, pp. 231-240. Conference paper, published paper (refereed)
    Abstract [en]

    The segmentation of individual words is a crucial step in several data mining methods for historical handwritten documents. Examples of applications include visual searching for query words (word spotting) and character-by-character text recognition. In this paper, we present a novel method for word segmentation that is adapted from recent advances in computer vision, deep learning and generic object detection. Our method has unique capabilities and it has found practical use in our current research project. It can easily be trained for different kinds of historical documents, uses full gray scale information, and does not require binarization as pre-processing or prior segmentation of individual text lines. We evaluate its performance using established error metrics, previously used in competitions for word segmentation, and demonstrate its usefulness for a 15th century handwritten document.

    Place, publisher, year, edition, pages
    Springer, 2015
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 9474
    National subject category
    Computer Vision and Robotics (Autonomous Systems)
    Research subject
    Computerized Image Processing
    Identifiers
    urn:nbn:se:uu:diva-272181 (URN); 10.1007/978-3-319-27857-5_21 (DOI); 000376400300021 (); 9783319278568 (ISBN); 9783319278575 (ISBN)
    Conference
    ISVC 2015, December 14–16, Las Vegas, NV
    Project
    q2b
    Research funder
    Vetenskapsrådet, 2012-5743
    Available from: 2015-12-18 Created: 2016-01-12 Last updated: 2019-04-08. Bibliographically approved
    6. Semantic and Verbatim Word Spotting using Deep Neural Networks
    2016 (English). In: Proceedings of the 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016, pp. 307-312. Conference paper, published paper (refereed)
    Abstract [en]

    In the last few years, deep convolutional neural networks have become ubiquitous in computer vision, achieving state-of-the-art results on problems like object detection, semantic segmentation, and image captioning. However, they have not yet been widely investigated in the document analysis community. In this paper, we present a word spotting system based on convolutional neural networks. We train a network to extract a powerful image representation, which we then embed into a word embedding space. This allows us to perform word spotting using both query-by-string and query-by-example in a variety of word embedding spaces, both learned and handcrafted, for verbatim as well as semantic word spotting. Our novel approach is versatile and the evaluation shows that it outperforms the previous state-of-the-art for word spotting on standard datasets.

    Series
    International Conference on Handwriting Recognition, ISSN 2167-6445
    Keywords
    handwritten word spotting, convolutional neural networks, deep learning, word embeddings
    National subject category
    Computer Vision and Robotics (Autonomous Systems)
    Research subject
    Computerized Image Processing
    Identifiers
    urn:nbn:se:uu:diva-306667 (URN); 10.1109/ICFHR.2016.60 (DOI); 000400052400056 (); 978-1-5090-0981-7 (ISBN)
    Conference
    15th International Conference on Frontiers in Handwriting Recognition (ICFHR), October 23-26, 2016, Shenzhen, China
    Project
    q2b
    Research funder
    Vetenskapsrådet, 2012-5743; Riksbankens Jubileumsfond, NHS14-2068:1
    Available from: 2016-11-01 Created: 2016-11-01 Last updated: 2019-04-08
    7. Historical Manuscript Production Date Estimation using Deep Convolutional Neural Networks
    2016 (English). Conference paper, published paper (refereed)
    Abstract [en]

    Deep learning has thus far not been used for dating of pre-modern handwritten documents. In this paper, we propose ways of using deep convolutional neural networks (CNNs) to estimate production dates for such manuscripts. In our approach, a CNN can either be used directly for estimating the production date or as a feature learning framework for other regression techniques. We explore the feature learning approach using Gaussian Process regression and Support Vector Regression. The evaluation is performed on a unique large dataset of over 10000 medieval charters from the Swedish collection Svenskt Diplomatariums huvudkartotek (SDHK). We show that deep learning is applicable to the task of dating documents and that the performance is on average comparable to that of a human expert.
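As a rough illustration of the feature-learning variant described above (CNN features fed to a separate regressor), the sketch below runs scikit-learn's Gaussian Process regression on precomputed image features. The feature extraction, kernel choice and data are placeholders, not the configuration used in the paper.

```python
# Illustrative sketch: regress a production year from precomputed CNN features
# with Gaussian Process regression (scikit-learn). Features and years are
# random placeholders standing in for real charter data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 128))          # stand-in for CNN descriptors
years = rng.uniform(1100, 1500, size=500)       # stand-in for charter dates

X_tr, X_te, y_tr, y_te = train_test_split(features, years, test_size=0.2, random_state=0)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_tr, y_tr)

pred, std = gpr.predict(X_te, return_std=True)
print("mean absolute error (years):", np.mean(np.abs(pred - y_te)))
```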

    Place, publisher, year, edition, pages
    IEEE, 2016
    Series
    International Conference on Handwriting Recognition, ISSN 2167-6445
    Keywords
    Document analysis, Manuscripts, Document dating, Digital Humanities
    National subject category
    Computer Vision and Robotics (Autonomous Systems)
    Research subject
    Computerized Image Processing
    Identifiers
    urn:nbn:se:uu:diva-306685 (URN); 10.1109/ICFHR.2016.114 (DOI); 000400052400039 (); 978-1-5090-0981-7 (ISBN)
    Conference
    International Conference on Frontiers in Handwriting Recognition (ICFHR), October 23-26, 2016, Shenzhen, China
    Project
    q2b, q2b_vr2012
    Research funder
    Vetenskapsrådet, 2012-5743; Riksbankens Jubileumsfond, NHS14-2068:1
    Available from: 2016-11-01 Created: 2016-11-01 Last updated: 2019-04-08
    8. CalligraphyNet: Augmenting handwriting generation with quill based stroke width
    2019 (English). Manuscript (preprint) (Other academic)
    Abstract [en]

    Realistic handwritten document generation garners a lot of interest from the document research community for its ability to generate annotated data. In the current approach we have used GAN-based stroke width enrichment and style-transfer-based refinement over generated data, which result in realistic-looking handwritten document images. The GAN part of data augmentation transfers the stroke variation introduced by a writing instrument onto images rendered from trajectories created by tracking coordinates along the stylus movement. The coordinates from stylus movement are augmented with the learned stroke width variations during the data augmentation block. An RNN model is then trained to learn the variation along the movement of the stylus along with the stroke variations corresponding to an input sequence of characters. This model is then used to generate images of words or sentences given an input character string. A document image thus created is used as a mask to transfer the style variations of the ink and the parchment. The generated image can capture the color content of the ink and parchment, useful for creating annotated data.
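The abstract above outlines a pipeline rather than an implementation. Purely as an illustration of the RNN component (characters in, stylus trajectory out), a minimal model skeleton might look like the following; the layer sizes and the (dx, dy, pen-state) output convention are assumptions, not details taken from the manuscript.

```python
# Minimal sketch of a character-conditioned trajectory generator: an LSTM
# consumes an embedded character sequence and emits one (dx, dy, pen_down)
# triple per time step. Sizes and vocabulary are illustrative only.
import torch
import torch.nn as nn

class TrajectoryRNN(nn.Module):
    def __init__(self, n_chars=80, emb=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb)
        self.lstm = nn.LSTM(emb, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 3)   # dx, dy, pen-down logit

    def forward(self, char_ids):
        h, _ = self.lstm(self.embed(char_ids))
        return self.head(h)

model = TrajectoryRNN()
dummy_chars = torch.randint(0, 80, (1, 20))   # a 20-character dummy string
offsets = model(dummy_chars)                  # shape (1, 20, 3)
print(offsets.shape)
```

A real handwriting-generation model would emit many trajectory steps per character and typically attends over the character sequence; this skeleton only shows the input/output interface.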

    National subject category
    Computer Systems
    Research subject
    Computerized Image Processing
    Identifiers
    urn:nbn:se:uu:diva-379633 (URN)
    Conference
    26th IEEE International Conference on Image Processing
    Note

    Currently under review

    Available from: 2019-03-19 Created: 2019-03-19 Last updated: 2019-04-08
  • 754.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    A novel word segmentation method based on object detection and deep learning (2015). In: Advances in Visual Computing: 11th International Symposium, ISVC 2015, Las Vegas, NV, USA, December 14-16, 2015, Proceedings, Part I / [ed] Bebis, G; Boyle, R; Parvin, B; Koracin, D; Pavlidis, I; Feris, R; McGraw, T; Elendt, M; Kopper, R; Ragan, E; Ye, Z; Weber, G, Springer, 2015, pp. 231-240. Conference paper (refereed)
    Abstract [en]

    The segmentation of individual words is a crucial step in several data mining methods for historical handwritten documents. Examples of applications include visual searching for query words (word spotting) and character-by-character text recognition. In this paper, we present a novel method for word segmentation that is adapted from recent advances in computer vision, deep learning and generic object detection. Our method has unique capabilities and it has found practical use in our current research project. It can easily be trained for different kinds of historical documents, uses full gray scale information, and does not require binarization as pre-processing or prior segmentation of individual text lines. We evaluate its performance using established error metrics, previously used in competitions for word segmentation, and demonstrate its usefulness for a 15th century handwritten document.
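To make the general idea concrete, treating words on a page as objects for a detector to localise, the snippet below instantiates a single-class detector with torchvision. This is not the network or training setup used in the paper (which predates this API); it only illustrates the detection formulation, and the input image is a random placeholder.

```python
# Illustrative sketch: a one-class ("word") object detector built on
# torchvision's Faster R-CNN; not the model described in the paper.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# num_classes = 2: background + "word"; no pretrained weights are downloaded.
detector = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=2)
detector.eval()

page = [torch.rand(3, 800, 600)]          # stand-in for a manuscript page image
with torch.no_grad():
    detections = detector(page)

# Each detection carries a box, a label and a score; after training, the boxes
# would be the candidate word regions on the page.
print(detections[0]["boxes"].shape, detections[0]["scores"].shape)
```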

  • 755.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Experiments on Large Scale Document Visualization using Image-based Word Clouds (2015). Report (Other academic)
    Abstract [en]

    In this paper, we introduce image-based word clouds as a novel tool for quick and aesthetic overviews of common words in collections of digitized text manuscripts. While OCR can be used to enable summaries and search functionality for printed modern text, historical and handwritten documents remain a challenge. By segmenting and counting word images, without applying manual transcription or OCR, we have developed a method that can produce word or tag clouds from document collections. Our new tool is not limited to any specific kind of text. We make further contributions in the form of stop-word removal, class-based feature weighting and visualization. An evaluation of the proposed tool includes comparisons with ground-truth word clouds on handwritten marriage licenses from the 17th century and the George Washington database of handwritten letters from the 18th century. Our experiments show that image-based word clouds capture the same information, albeit approximately, as regular word clouds based on text data.
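The core of the visualisation described above is simple: word images that match each other are grouped, counted, and drawn at a size proportional to their frequency. The sketch below shows only that sizing step, with made-up counts standing in for the clusters of matched word images; the word list and height range are arbitrary assumptions.

```python
# Illustrative sketch: scale word-image thumbnails by frequency, as in a word cloud.
# The counts below are placeholders for clusters of matched word images.
counts = {"anno": 412, "domini": 388, "hustru": 211, "gård": 95, "vittne": 60}

max_count = max(counts.values())
min_h, max_h = 20, 120                    # thumbnail heights in pixels (assumed)

def thumbnail_height(count):
    """Linear scaling between min_h and max_h, proportional to frequency."""
    return int(min_h + (max_h - min_h) * count / max_count)

for word, count in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{word:10s} count={count:4d} -> render at height {thumbnail_height(count)} px")
```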

  • 756.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Semantic and Verbatim Word Spotting using Deep Neural Networks (2016). In: Proceedings of the 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016, pp. 307-312. Conference paper (refereed)
    Abstract [en]

    In the last few years, deep convolutional neural networks have become ubiquitous in computer vision, achieving state-of-the-art results on problems like object detection, semantic segmentation, and image captioning. However, they have not yet been widely investigated in the document analysis community. In this paper, we present a word spotting system based on convolutional neural networks. We train a network to extract a powerful image representation, which we then embed into a word embedding space. This allows us to perform word spotting using both query-by-string and query-by-example in a variety of word embedding spaces, both learned and handcrafted, for verbatim as well as semantic word spotting. Our novel approach is versatile and the evaluation shows that it outperforms the previous state-of-the-art for word spotting on standard datasets.
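To make the query-by-string idea concrete, the sketch below builds a small PHOC-style (pyramidal histogram of characters) embedding for a query and ranks word images by cosine similarity to it. Whether PHOC was the handcrafted embedding used in this paper is an assumption, and the word-image embeddings here are random placeholders where a trained CNN's outputs would go.

```python
# Illustrative sketch of query-by-string word spotting: embed the query string
# into a PHOC-like attribute vector and rank word-image embeddings by cosine
# similarity. Image embeddings are noisy stand-ins for CNN outputs.
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
LEVELS = (1, 2, 3)  # pyramid levels of the character histogram

def phoc(word, alphabet=ALPHABET, levels=LEVELS):
    word = word.lower()
    vec = []
    for level in levels:
        for part in range(level):
            lo, hi = part / level, (part + 1) / level
            seg = word[int(lo * len(word)): max(int(lo * len(word)) + 1, int(hi * len(word)))]
            vec.extend(1.0 if c in seg else 0.0 for c in alphabet)
    return np.array(vec)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

rng = np.random.default_rng(1)
vocab = ["church", "charter", "witness", "farm", "tax"]
image_embeddings = {w: phoc(w) + 0.1 * rng.normal(size=phoc(w).size) for w in vocab}

query = phoc("church")
ranking = sorted(image_embeddings, key=lambda w: -cosine(query, image_embeddings[w]))
print(ranking)  # word images ranked by similarity to the query string
```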

  • 757.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Visualizing document image collections using image-based word clouds (2015). In: Advances in Visual Computing: 11th International Symposium, ISVC 2015, Las Vegas, NV, USA, December 14-16, 2015, Proceedings, Part I / [ed] Bebis, G; Boyle, R; Parvin, B; Koracin, D; Pavlidis, I; Feris, R; McGraw, T; Elendt, M; Kopper, R; Ragan, E; Ye, Z; Weber, G, Springer, 2015, pp. 297-306. Conference paper (refereed)
  • 758.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Lindström, Jonas
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Historisk-filosofiska fakulteten, Historiska institutionen.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Neural Ctrl-F: Segmentation-free Query-by-String Word Spotting in Handwritten Manuscript Collections (2017). Conference paper (Other academic)
  • 759.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Lindström, Jonas
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Historisk-filosofiska fakulteten, Historiska institutionen.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Neural Ctrl-F: Segmentation-free query-by-string word spotting in handwritten manuscript collections (2017). In: 2017 IEEE International Conference on Computer Vision (ICCV), IEEE, 2017, pp. 4443-4452. Conference paper (refereed)
    Abstract [en]

    In this paper, we approach the problem of segmentation-free query-by-string word spotting for handwritten documents. In other words, we use methods inspired by computer vision and machine learning to search for words in large collections of digitized manuscripts. In particular, we are interested in historical handwritten texts, which are often far more challenging than modern printed documents. This task is important, as it provides people with a way to quickly find what they are looking for in large collections that are tedious and difficult to read manually. To this end, we introduce an end-to-end trainable model based on deep neural networks that we call Ctrl-F-Net. Given a full manuscript page, the model simultaneously generates region proposals, and embeds these into a distributed word embedding space, where searches are performed. We evaluate the model on common benchmarks for handwritten word spotting, outperforming the previous state-of-the-art segmentation-free approaches by a large margin, and in some cases even segmentation-based approaches. One interesting real-life application of our approach is to help historians to find and count specific words in court records that are related to women's sustenance activities and division of labor. We provide promising preliminary experiments that validate our method on this task.

  • 760.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Lindström, Jonas
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Historisk-filosofiska fakulteten, Historiska institutionen.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Neural Word Search in Historical Manuscript Collections. Manuscript (preprint) (Other academic)
    Abstract [en]

    We address the problem of segmenting and retrieving word images in collections of historical manuscripts given a text query. This is commonly referred to as "word spotting". To this end, we first propose an end-to-end trainable model based on deep neural networks that we dub Ctrl-F-Net. The model simultaneously generates region proposals and embeds them into a word embedding space, wherein a search is performed. We further introduce a simplified version called Ctrl-F-Mini. It is faster with similar performance, though it is limited to more easily segmented manuscripts. We evaluate both models on common benchmark datasets and surpass the previous state of the art. Finally, in collaboration with historians, we employ the Ctrl-F-Net to search within a large manuscript collection of over 100 thousand pages, written across two centuries. With only 11 training pages, we enable large-scale data collection in manuscript-based historical research. This speeds up data collection and increases the number of manuscripts processed by orders of magnitude. Given the time-consuming manual work required to study old manuscripts in the humanities, quick and robust tools for word spotting have the potential to revolutionise domains like history, religion and language.
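Once every region proposal on every page has been embedded, the search itself reduces to a nearest-neighbour lookup in the embedding space. The sketch below shows only that retrieval step, with random vectors standing in for the proposal and query embeddings that a model such as Ctrl-F-Net would produce; sizes and metadata are placeholders.

```python
# Illustrative sketch of the retrieval step in embedding-based word search:
# rank stored region-proposal embeddings by cosine similarity to a query
# embedding and report the top hits with their page and bounding box.
import numpy as np

rng = np.random.default_rng(2)
n_proposals, dim = 10_000, 128

embeddings = rng.normal(size=(n_proposals, dim)).astype(np.float32)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
pages = rng.integers(0, 500, size=n_proposals)            # page index per proposal
boxes = rng.integers(0, 2000, size=(n_proposals, 4))      # placeholder x0, y0, x1, y1

query = rng.normal(size=dim).astype(np.float32)
query /= np.linalg.norm(query)

scores = embeddings @ query                                # cosine similarity
top = np.argsort(-scores)[:5]
for rank, idx in enumerate(top, start=1):
    print(f"hit {rank}: page {pages[idx]}, box {boxes[idx].tolist()}, score {scores[idx]:.3f}")
```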

  • 761.
    Wilkinson, Tomas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Nettelblad, Carl
    Bootstrapping Weakly Supervised Segmentation-free Word Spotting through HMM-based Alignment. Manuscript (preprint) (Other academic)
    Abstract [en]

    Recent work in word spotting in handwritten documents has yielded impressive results. Yet this progress has largely been made by supervised learning systems which are dependent on manually annotated data, making deployment to new collections a significant effort. In this paper we propose an approach utilising transcriptions without bounding box annotations to train segmentation-free word spotting models, given a model partially trained with full annotations. This is done through an alignment procedure based on hidden Markov models. This model can create a tentative mapping between word region proposals and the transcriptions to automatically create additional weakly annotated training data. Using as little as 1% and 10% of the fully annotated training sets for partial convergence, we automatically annotate the remaining training data and successfully train using it. Across all datasets, our approach comes within a few mAP% of achieving the same performance as a model trained with only full ground truth. We believe that this will be a significant advance towards a more general use of word spotting, since digital transcription data will already exist for parts of many collections of interest.
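A toy version of the alignment idea, forcing a monotonic mapping between an ordered sequence of word region proposals and the known transcription, can be written as a small dynamic program. The scoring matrix and data here are placeholders; the paper's actual procedure is HMM-based and operates on model outputs, not on these toy scores.

```python
# Toy sketch of monotonic alignment between word region proposals (in reading
# order) and a known transcription, via dynamic programming. The similarity
# matrix is a placeholder for model-based scores (the paper uses an HMM).
import numpy as np

def align(similarity):
    """similarity[i, j]: score for matching proposal i to transcript word j.
    Returns a monotonic assignment proposal -> word index (or -1 if skipped)."""
    n, m = similarity.shape
    dp = np.full((n + 1, m + 1), -np.inf)
    dp[0, :] = 0.0
    back = np.zeros((n + 1, m + 1), dtype=int)  # 0 = skip proposal, 1 = match
    for i in range(1, n + 1):
        for j in range(m + 1):
            dp[i, j] = dp[i - 1, j]                      # skip proposal i-1
            if j > 0 and dp[i - 1, j - 1] + similarity[i - 1, j - 1] > dp[i, j]:
                dp[i, j] = dp[i - 1, j - 1] + similarity[i - 1, j - 1]
                back[i, j] = 1
    # Trace back the best-scoring alignment.
    j = int(np.argmax(dp[n]))
    assignment = [-1] * n
    for i in range(n, 0, -1):
        if back[i, j]:
            assignment[i - 1] = j - 1
            j -= 1
    return assignment

rng = np.random.default_rng(3)
sim = rng.random((6, 4))           # 6 proposals, 4 transcript words
print(align(sim))
```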

  • 762.
    Wretstam, Oskar
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Infrared image-based modeling and rendering (2017). Independent thesis, advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [sv]

    Image-based modelling from visual images has developed considerably during the early parts of the 21st century. Given a sequence of ordinary two-dimensional images of a scene from different perspectives, the goal is to reconstruct a three-dimensional model. In this thesis, a system for automated uncalibrated scene reconstruction from infrared images is implemented and tested. Uncalibrated reconstruction refers to the fact that camera parameters, such as focal length and focus, are unknown and only images are used as input to the system. A major application area for thermal cameras is inspection. Temperature differences in an image can indicate, for example, poor insulation or high friction. If an automated system can create a three-dimensional model of a scene, it can help simplify inspection and provide a better overview. Thermal images generally have lower resolution, less contrast and less high-frequency content than visual images. These properties of infrared images complicate the extraction and matching of points in the images, which are important steps in the reconstruction. To address this, the images are preprocessed before reconstruction, and a selection of preprocessing methods has been tested. Reconstruction from thermal images also places additional demands on the reconstruction, since it is important to preserve the thermal accuracy of the images in the model. Three main results are obtained from this thesis. First, with the implementation proposed in this thesis it is possible to compute camera calibration and pose as well as a sparse reconstruction from an infrared image sequence. Second, the correlation of the temperature measurements in the images used for the reconstruction is presented and analysed. Finally, the tested preprocessing does not show an improvement of the reconstruction proportional to the increased computational complexity.
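The thesis implementation is not reproduced here. As an illustration of the preprocessing-plus-point-matching step it discusses, the sketch below applies contrast-limited histogram equalisation (CLAHE) and ORB feature matching with OpenCV; whether these exact operators were among those evaluated in the thesis is not stated above, so treat the choices and file names as assumptions.

```python
# Illustrative sketch: preprocess two low-contrast (e.g. infrared) frames with
# CLAHE, then detect and match ORB features -- the kind of point matching that
# feeds an uncalibrated structure-from-motion reconstruction.
import cv2

def preprocess(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)

img1 = preprocess("ir_frame_001.png")   # placeholder file names
img2 = preprocess("ir_frame_002.png")

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} tentative correspondences between the two frames")
```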

  • 763.
    Wu, Chi-Chih
    et al.
    Uppsala universitet, Science for Life Laboratory, SciLifeLab. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Biologiska sektionen, Institutionen för ekologi och genetik, Evolutionsbiologi.
    Klaesson, Axel
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Farmaceutiska fakulteten, Institutionen för farmaceutisk biovetenskap.
    Buskas, Julia
    Uppsala universitet, Science for Life Laboratory, SciLifeLab. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Biologiska sektionen, Institutionen för ekologi och genetik, Evolutionsbiologi.
    Ranefall, Petter
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Mirzazadeh, Reza
    Söderberg, Ola
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Farmaceutiska fakulteten, Institutionen för farmaceutisk biovetenskap.
    Wolf, Jochen B. W.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Biologiska sektionen, Institutionen för ekologi och genetik, Evolutionsbiologi. Uppsala universitet, Science for Life Laboratory, SciLifeLab.
    In situ quantification of individual mRNA transcripts in melanocytes discloses gene regulation of relevance to speciation (2019). In: Journal of Experimental Biology, ISSN 0022-0949, E-ISSN 1477-9145, Vol. 222, no. 5. Journal article (refereed)
  • 764.
    Wuttke, Anne
    et al.
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för medicinsk cellbiologi.
    Gandasi, Nikhil
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för medicinsk cellbiologi.
    Wählby, Carolina
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Science for Life Laboratory, SciLifeLab. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Barg, Sebastian
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för medicinsk cellbiologi.
    Tengholm, Anders
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för medicinsk cellbiologi.
    Continuous imaging of exocytosis in β-cells reveals negative feedback of insulin. Manuscript (preprint) (Other academic)
  • 765.
    Wählby, Carolina
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Science for Life Laboratory, SciLifeLab. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    High throughput phenotyping of model organisms (2012). In: BioImage Informatics 2012 / [ed] Fuhui Long, Ivo F. Sbalzarini, Pavel Tomancak and Michael Unser, Dresden, Germany, 2012, p. 45. Conference paper (refereed)
    Abstract [en]

    Microscopy has emerged as one of the most powerful and informative ways to analyze cell-based high-throughput screening samples in experiments designed to uncover novel drugs and drug targets. However, many diseases and biological pathways can be better studied in whole animals – particularly diseases that involve organ systems and multi-cellular interactions, such as metabolism, infection, vascularization, and development. Two model organisms compatible with high-throughput phenotyping are the 1 mm long roundworm C. elegans and the transparent embryo of zebrafish (Danio rerio). C. elegans is tractable as it can be handled using similar robotics, multi-well plates, and flow-sorting systems as are used for high-throughput screening of cells. The worm is also transparent throughout its lifecycle and is attractive as a model for genetic functions as its genes can be turned off by RNA-interference. Zebrafish embryos have also proved to be a vital model organism in many fields of research, including organismal development, cancer, and neurobiology. Zebrafish, being vertebrates, exhibit features common to phylogenetically higher organisms such as a true vasculature and central nervous system.

    Basically any phenotypic change that can be visually observed (in untreated or stained worms and fish) can also be imaged. However, visual assessment of phenotypic variation is tedious and prone to error as well as observer bias. Screening in high throughput limits image resolution and time-lapse information. Still, the images are typically rich in information and the number of images for a standard screen often exceeds 100 000, ruling out visual inspection. Generation of automated image analysis platforms will increase the throughput of data analysis, improve the robustness of phenotype scoring, and allow for reliable application of statistical metrics for evaluating assay performance and identifying active compounds.

    We have developed a platform for automated analysis of C. elegans assays, and are currently developing tools for analysis of zebrafish embryos. Our worm analysis tools, collected in the WormToolbox, can identify individual worms even as they cross and overlap, and quantify a large number of features, including mapping of reporter protein expression patterns to the worm anatomy. We have evaluated the tools on screens for novel treatments of infectious disease and genetic perturbations affecting fat metabolism. The WormToolbox is part of the free and open source CellProfiler software, also including methods for image assay quality control and feature selection by machine learning.

  • 766.
    Wählby, Carolina
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Science for Life Laboratory, SciLifeLab. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Image Segmentation, Processing and Analysis in Microscopy and Life Science (2015). In: Mathematical Models in Biology: Bringing Mathematics to Life, Springer, 2015, pp. 1-16. Book chapter, part of anthology (Other academic)
  • 767.
    Wählby, Carolina
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion. Uppsala universitet, Science for Life Laboratory, SciLifeLab.
    Conery, Annie Lee
    Bray, Mark-Anthony
    Kamentsky, Lee
    Larkins-Ford, Jonah
    Sokolnicki, Katherine L.
    Veneskey, Matthew
    Michaels, Kerry
    Carpenter, Anne E.
    O'Rourke, Eyleen J.
    High- and low-throughput scoring of fat mass and body fat distribution in C. elegans (2014). In: Methods, ISSN 1046-2023, E-ISSN 1095-9130, Vol. 68, no. 3, pp. 492-499. Journal article (refereed)
    Abstract [en]

    Fat accumulation is a complex phenotype affected by factors such as neuroendocrine signaling, feeding, activity, and reproductive output. Accordingly, the most informative screens for genes and compounds affecting fat accumulation would be those carried out in whole living animals. Caenorhabditis elegans is a well-established and effective model organism, especially for biological processes that involve organ systems and multicellular interactions, such as metabolism. Every cell in the transparent body of C. elegans is visible under a light microscope. Consequently, an accessible and reliable method to visualize worm lipid-droplet fat depots would make C. elegans the only metazoan in which genes affecting not only fat mass but also body fat distribution could be assessed at a genome-wide scale. Here we present a radical improvement in oil red O worm staining together with high-throughput image-based phenotyping. The three-step sample preparation method is robust, formaldehyde-free, and inexpensive, and requires only 15 min of hands-on time to process a 96-well plate. Together with our free and user-friendly automated image analysis package, this method enables C. elegans sample preparation and phenotype scoring at a scale that is compatible with genome-wide screens. Thus we present a feasible approach to small-scale phenotyping and large-scale screening for genetic and/or chemical perturbations that lead to alterations in fat quantity and distribution in whole animals.
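The published analysis package itself is not reproduced in this listing, but the underlying measurement, how much of each worm's body area is covered by stain above a threshold, can be illustrated with scikit-image as below. The channels, thresholds and file name are placeholders, not the published pipeline settings.

```python
# Illustrative sketch: fraction of worm body area covered by oil red O stain,
# estimated by simple thresholding. Not the published pipeline; the image file
# and threshold choices are placeholders.
import numpy as np
from skimage import io, filters, color

rgb = io.imread("worm_plate_well_A01.png")          # placeholder image
gray = color.rgb2gray(rgb)

# Body mask: assume worms are darker than the well background (bright field).
body = gray < filters.threshold_otsu(gray)

# Stain mask: oil red O appears as strong red relative to green.
redness = rgb[..., 0].astype(float) - rgb[..., 1].astype(float)
stain = (redness > filters.threshold_otsu(redness)) & body

fat_fraction = stain.sum() / max(body.sum(), 1)
print(f"stained fraction of body area: {fat_fraction:.3f}")
```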

  • 768.
    Wählby, Carolina
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion. Uppsala universitet, Science for Life Laboratory, SciLifeLab.
    Kamentsky, Lee
    Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA.
    Liu, Zihan H
    Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA.
    Riklin-Raviv, Tammy
    Conery, Annie L
    Dept. of Molecular Biology and Center for Computational and Integrative Biology, Mass. General Hospital, Boston, MA.
    O'Rourke, Eyleen
    Sokolnicki, Katherine
    Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA.
    Visvikis, Orane
    Developmental Immunology Program, Dept. of Pediatrics, Mass. General Hospital, Boston, MA.
    Ljosa, Vebjorn
    Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA.
    Irazoqui, Javier E
    Developmental Immunology Program, Dept. of Pediatrics, Mass. General Hospital, Boston, MA.
    Golland, Polina
    Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA.
    Ruvkun, Gary
    Ausubel, Frederick M
    Dept. of Molecular Biology and Center for Computational and Integrative Biology, Mass. General Hospital, Boston, MA.
    Carpenter, Anne E
    Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA.
    An image analysis toolbox for high-throughput C. elegans assays (2012). In: Nature Methods, ISSN 1548-7091, E-ISSN 1548-7105, Vol. 9, no. 7, pp. 714-716. Journal article (refereed)
    Abstract [en]

    We present a toolbox for high-throughput screening of image-based Caenorhabditis elegans phenotypes. The image analysis algorithms measure morphological phenotypes in individual worms and are effective for a variety of assays and imaging systems from different laboratories. The toolbox is available via the open-source CellProfiler project and enables objective scoring of whole-animal high-throughput image-based assays using this unique model organism for the study of diverse biological pathways relevant to human disease.

  • 769.
    Yeh, Alexander
    et al.
    Chalmers University of Technology, Gothenburg, Sweden.
    Ratsamee, Photchara
    Osaka University, Osaka, Japan.
    Kiyokawa, Kiyoshi
    Nara Institute of Science and Technology (NAIST), Nara, Japan.
    Uranishi, Yuki
    Osaka University, Osaka, Japan.
    Mashita, Tomohiro
    Osaka University, Osaka, Japan.
    Takemura, Haruo
    Osaka University, Osaka, Japan.
    Fjeld, Morten
    Chalmers University of Technology, Gothenburg, Sweden.
    Obaid, Mohammad
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Exploring proxemics for human-drone interaction (2017). In: Proc. 5th International Conference on Human Agent Interaction, New York: ACM Press, 2017, pp. 81-88. Conference paper (refereed)
    Abstract [en]

    We present a human-centered designed social drone aimed at use in human crowd environments. Based on design studies and focus groups, we created a prototype of a social drone with a social shape, face and voice for human interaction. We used the prototype for a proxemic study, comparing the distance from the drone that humans could comfortably accept with what they would require for a nonsocial drone. The socially shaped design with a greeting voice added decreased the acceptable distance markedly, as did present or previous pet ownership, and maleness. We also explored the proximity sphere around humans with a socially shaped drone in a validation study varying lateral distance and height. Both lateral distance and the higher height of 1.8 m, compared to the lower height of 1.2 m, decreased the required comfortable distance as the drone approached.

  • 770.
    Zak, Edvard
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Including Smartphone End User Apps in the Context of the Company Contact Center (2014). Independent thesis, advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    Smartphones are becoming increasingly popular, with the result that customers prefer to carry out at least some customer services using an app on a mobile device. Among app users, smooth transfer to a live agent is seen as an important feature, and this means that the company contact center needs a solution to handle this as well as increasing numbers of interactions. The question this thesis tries to answer is: how can smartphone end user apps be included in the context of the company contact center? To answer this question, research was conducted regarding the possibilities of an Android smartphone, with the results of this research being used to define a use case and a state flow diagram and to create a demonstration app. The thesis showed that it is possible to have an app as an online channel for customer service interactions. New possibilities in comparison to traditional telephony include that customer data such as topic, authentication, location and multimedia can be sent to the contact center before an actual interaction is started.

  • 771.
    Zhang, Hanqian
    et al.
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för medicinska vetenskaper, Dermatologi och venereologi.
    Ericsson, Maja
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för medicinska vetenskaper, Dermatologi och venereologi.
    Virtanen, Marie
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för medicinska vetenskaper, Dermatologi och venereologi.
    Weström, Simone
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för medicinska vetenskaper, Dermatologi och venereologi.
    Wählby, Carolina
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Vahlquist, Anders
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för medicinska vetenskaper, Dermatologi och venereologi.
    Törmä, Hans
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för medicinska vetenskaper, Dermatologi och venereologi.
    Quantitative image analysis of protein expression and colocalisation in skin sections2018Ingår i: Experimental dermatology, ISSN 0906-6705, E-ISSN 1600-0625, Vol. 27, nr 2, s. 196-199Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Immunofluorescence (IF) and in situ proximity ligation assay (isPLA) are techniques that are used for in situ protein expression and colocalisation analysis, respectively. However, an efficient quantitative method to analyse both IF and isPLA staining on skin sections is lacking. Therefore, we developed a new method for semi-automatic quantitative layer-by-layer measurement of protein expression and colocalisation in skin sections using the free open-source software CellProfiler. As a proof of principle, IF and isPLA of ichthyosis-related proteins TGm-1 and SDR9C7 were examined. The results indicate that this new method can be used for protein expression and colocalisation analysis in skin sections.
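    The layer-by-layer quantification idea can also be sketched outside CellProfiler; the snippet below is an illustrative assumption (plain NumPy/SciPy, not the authors' pipeline), where `layerwise_mean_intensity`, the number of layers and the synthetic data are all made up for demonstration.

```python
# Illustrative sketch (not the authors' CellProfiler pipeline): measure mean
# fluorescence intensity in successive "layers" of a tissue mask, where each
# layer is a band of pixels at increasing distance from the mask border.
import numpy as np
from scipy import ndimage as ndi

def layerwise_mean_intensity(intensity, tissue_mask, n_layers=5):
    """Return the mean intensity per distance band inside tissue_mask."""
    # Distance of every tissue pixel to the mask boundary
    dist = ndi.distance_transform_edt(tissue_mask)
    edges = np.linspace(0, dist.max() + 1e-6, n_layers + 1)
    means = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = tissue_mask & (dist >= lo) & (dist < hi)
        means.append(intensity[band].mean() if band.any() else np.nan)
    return means

# Toy example with synthetic data
rng = np.random.default_rng(0)
img = rng.random((128, 128))
mask = np.zeros((128, 128), dtype=bool)
mask[32:96, 32:96] = True
print(layerwise_mean_intensity(img, mask))
```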

  • 772. Zhang, Peilin
    et al.
    Gao, Alex Yuan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Theel, Oliver
    Bandit learning with concurrent transmissions for energy-efficient flooding in sensor networks2018Ingår i: EAI Endorsed Transactions on Industrial Networks and Intelligent Systems, ISSN 2410-0218, Vol. 4, nr 13, artikel-id e4Artikel i tidskrift (Refereegranskat)
  • 773. Zhang, Peilin
    et al.
    Gao, Alex Yuan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Theel, Oliver
    Less is More: Learning more with concurrent transmissions for energy-efficient flooding2017Ingår i: Proc. 14th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, New York: ACM Press, 2017Konferensbidrag (Refereegranskat)
  • 774. Zuluaga, Maria A.
    et al.
    Orkisz, Maciej
    Dong, Pei
    Pacureanu, Alexandra
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion. Uppsala universitet, Science for Life Laboratory, SciLifeLab.
    Gouttenoire, Pierre-Jean
    Peyrin, Françoise
    Bone canalicular network segmentation in 3D nano-CT images through geodesic voting and image tessellation2014Ingår i: Physics in Medicine and Biology, ISSN 0031-9155, E-ISSN 1361-6560, Vol. 59, nr 9, s. 2155-2171Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Recent studies emphasized the role of the bone lacuno-canalicular network (LCN) in the understanding of bone diseases such as osteoporosis. However, suitable methods to investigate this structure are lacking. The aim of this paper is to introduce a methodology to segment the LCN from three-dimensional (3D) synchrotron radiation nano-CT images. Segmentation of such structures is challenging due to several factors, such as limited contrast and signal-to-noise ratio, partial volume effects, and the huge amount of data that needs to be processed, which restricts user interaction. We use an approach based on minimum-cost paths and geodesic voting, for which we propose a fully automatic initialization scheme based on a tessellation of the image domain. The centroids of pre-segmented lacunae are used as Voronoi-tessellation seeds and as start-points of a fast-marching front propagation, whereas the end-points are distributed in the vicinity of each Voronoi-region boundary. This initialization scheme was devised to cope with complex biological structures involving cells interconnected by multiple thread-like, branching processes, while the seminal geodesic-voting method only copes with tree-like structures. Our method has been assessed quantitatively on phantom data and qualitatively on real datasets, demonstrating its feasibility. To the best of our knowledge, the presented 3D renderings of lacunae interconnected by their canaliculi were achieved for the first time.
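    The geodesic-voting step can be illustrated with a deliberately simplified sketch; it is not the authors' implementation, and the `geodesic_votes` helper, the toy cost image and the chosen end-points are assumptions made for demonstration. Minimum-cost paths from scattered end-points back to a seed are traced with scikit-image and accumulated as votes.

```python
# Simplified illustration of the geodesic-voting idea (not the authors' code):
# trace minimum-cost paths from many end-points back to a seed and accumulate
# "votes" along the traversed pixels; heavily voted pixels suggest structure.
import numpy as np
from skimage.graph import route_through_array

def geodesic_votes(cost_image, seed, end_points):
    votes = np.zeros(cost_image.shape, dtype=int)
    for end in end_points:
        path, _ = route_through_array(cost_image, seed, end,
                                      fully_connected=True, geometric=True)
        for r, c in path:
            votes[r, c] += 1
    return votes

# Toy cost image: low cost along a diagonal "canal", high cost elsewhere
cost = np.full((64, 64), 10.0)
rr = np.arange(64)
cost[rr, rr] = 1.0
ends = [(63, 60), (60, 63), (63, 63)]
vote_map = geodesic_votes(cost, (0, 0), ends)
print(vote_map.max())
```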

  • 775.
    Ärleryd, Sebastian
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Realtime Virtual 3D Image of Kidney Using Pre-Operative CT Image for Geometry and Realtime US-Image for Tracking2014Självständigt arbete på avancerad nivå (yrkesexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

    In this thesis a method is presented to provide a 3D visualization of the human kidney and surrounding tissue during kidney surgery. The method takes advantage of the high detail of 3D X-Ray Computed Tomography (CT) and the high time resolution of Ultrasonography (US). By extracting the geometry from a single preoperative CT scan and animating the kidney by tracking its position in real time US images, a 3D visualization of the surgical volume can be created. The first part of the project consisted of building an imaging phantom as a simplified model of the human body around the kidney. It consists of three parts: the shell part representing surrounding tissue, the kidney part representing the kidney soft tissue and a kidney stone part embedded in the kidney part. The shell and soft tissue kidney parts were cast with a mixture of the synthetic polymer Polyvinyl Alcohol (PVA) and water. The kidney stone part was cast with epoxy glue. All three parts were designed to look like human tissue in CT and US images. The method is a pipeline of stages that starts with acquiring the CT image as a 3D matrix of intensity values. This matrix is then segmented, resulting in separate polygonal 3D models for the three phantom parts. A scan of the model is then performed using US, producing a sequence of US images. A computer program extracts easily recognizable image feature points from the images in the sequence. Knowing the spatial position and orientation of a new US image in which these features can be found again allows the position of the kidney to be calculated. The presented method is realized as a proof of concept implementation of the pipeline. The implementation displays an interactive visualization where the kidney is positioned according to a user-selected US image scanned for image features. Using the proof of concept implementation as a guide, the accuracy of the proposed method is estimated to be bounded by the acquired image data. For high resolution CT and US images, the accuracy can be in the order of a few millimeters.
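    A hedged sketch of the feature-point matching step is given below; the thesis does not specify the detector, so ORB with OpenCV is assumed here purely for illustration, and `match_frames` and the toy frames are invented for the example.

```python
# A minimal sketch of the feature-tracking step (an assumption, not the thesis
# code): detect ORB key points in a reference ultrasound frame and match them
# against a new frame; the matched coordinates could then drive the pose update.
import cv2
import numpy as np

def match_frames(ref_gray, new_gray, max_matches=50):
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(ref_gray, None)
    kp2, des2 = orb.detectAndCompute(new_gray, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:max_matches]]

# Toy frames: the "new" frame is a shifted copy of the reference
ref = (np.random.default_rng(1).random((256, 256)) * 255).astype(np.uint8)
new = np.roll(ref, shift=(5, 3), axis=(0, 1))
pairs = match_frames(ref, new)
print(f"{len(pairs)} matched feature pairs")
```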

  • 776.
    Åberg, Anna Cristina
    et al.
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för folkhälso- och vårdvetenskap, Geriatrik.
    Halvorsen, Kjartan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för systemteknik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Reglerteknik.
    From, Ingrid
    Dalarna Univ, Sch Educ Hlth & Social Studies, SE-79188 Falun, Sweden.
    Bergman Bruhn, Åsa
    Dalarna Univ, Sch Educ Hlth & Social Studies, SE-79188 Falun, Sweden.
    Oestreicher, Lars
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Melander-Wikman, Anita
    Lulea Univ Technol, Div Hlth & Rehab, Dept Hlth Sci, SE-97187 Lulea, Sweden.
    A study protocol for applying user participation and co-learning: Lessons learned from the eBalance project2017Ingår i: International Journal of Environmental Research and Public Health, ISSN 1661-7827, E-ISSN 1660-4601, Vol. 14, nr 5, artikel-id 512Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    The eBalance project is based on the idea that serious exergames (i.e., computer gaming systems with an interface that requires physical exertion to play) that are well adapted to users can become a substantial part of a solution to the recognized problems of insufficient engagement in fall-prevention exercise and the high levels of fall-related injuries among older people. This project is carried out as a collaboration between eight older people who have an interest in balance training and met the inclusion criteria of independence in personal activities of daily living, access to and basic knowledge of a computer, four staff working with the rehabilitation of older adults, and an interdisciplinary group of six research coordinators covering the areas of geriatric care and rehabilitation, as well as information technology and computer science. This paper describes the study protocol of the project's initial phase, which aims to develop a working partnership with potential users of fall-prevention exergames, including its conceptual underpinnings. The qualitative methodology was inspired by an ethnographical approach, implying a combination of methods that allowed the design to evolve through the study based on the participants' reflections. A participatory and appreciative action and reflection (PAAR) approach, accompanied by inquiries inspired by the Normalization Process Theory (NPT), was used in interactive workshops, including exergame testing, and in between-workshop activities. Data were collected through audio recordings, photos, and different types of written documentation. The findings provide a description of the methodology thus developed and applied. They display a methodology that can be useful for the design and development of care services and innovations for older persons where user participation is in focus.

  • 777. Åhlén, Julia
    et al.
    Seipel, Stefan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Automatic Water Body Extraction From Remote Sensing Images Using Entropy2015Ingår i: SGEM2015 Conference Proceedings, 2015, Vol. 2, s. 517-524Konferensbidrag (Refereegranskat)
    Abstract [en]

    This research focuses on the automatic extraction of river banks and other inland waters from remote sensing images. There are no up-to-date accessible databases of rivers and most other water objects for modelling purposes. The main reason is that some regions are hard to access with traditional ground-based techniques, so the boundaries of river banks are uncertain in many geographical positions. Another reason is the limitations of the widely applied method for extraction of water bodies, the normalized-difference water index (NDWI). There is a novel approach to extracting water bodies based on pixel-level variability, or entropy; however, while the method works reasonably well on high spatial resolution images, its performance has not been verified on moderate or low resolution images. Remaining problems include the identification of mixed water pixels and of features such as roads built adjacent to river banks, which can therefore be classified as rivers. In this work we propose an automatic extraction of river banks using image entropy combined with NDWI identification. In this study only moderate spatial resolution Landsat TM images are tested. The areas of interest include both major river banks and inland lakes. Calculating entropy alone on such coarse spatial resolution images would lead to misinterpretation of water bodies, which exhibit the same small variation of pixel values as, e.g., some open or urban areas. Image entropy is therefore calculated with a modification that incorporates a local normalization index, or variability coefficient. NDWI produces an image in which clear water exhibits a large difference compared to other land features. We present an algorithm that applies NDWI prior to entropy processing, so that the bands used to calculate it are chosen in clear connection to water body features that are clearly discernible. As a result we visualize a clear segmentation of the water bodies from the remote sensing images and verify the coordinates against a given geographic reference.
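    The two ingredients named above, NDWI and local entropy, can be sketched as follows; this is an assumed illustration rather than the authors' exact algorithm, and the thresholds, the `ndwi`/`water_mask` helpers and the toy bands are placeholders.

```python
# Hedged sketch of the two ingredients mentioned in the abstract (not the
# authors' exact algorithm): the NDWI index from green and near-infrared bands,
# and a local entropy map, combined by simple thresholding into a water mask.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def ndwi(green, nir, eps=1e-9):
    return (green - nir) / (green + nir + eps)

def water_mask(green, nir, ndwi_thresh=0.1, entropy_thresh=3.0):
    idx = ndwi(green, nir)
    ent = entropy(img_as_ubyte((idx + 1) / 2), disk(5))  # local entropy of scaled NDWI
    # Water: high NDWI and low local variability (low entropy)
    return (idx > ndwi_thresh) & (ent < entropy_thresh)

# Toy bands
rng = np.random.default_rng(2)
green = rng.random((100, 100)).astype(np.float32)
nir = rng.random((100, 100)).astype(np.float32)
nir[:, :40] *= 0.2          # pretend the left side is water (low NIR)
print(water_mask(green, nir).sum(), "water pixels")
```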

  • 778.
    Åhlén, Julia
    et al.
    Högskolan i Gävle.
    Seipel, Stefan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Knowledge Based Single Building Extraction and Recognition2014Ingår i: Proceedings WSEAS International Conference on Computer Engineering and Applications, 2014, 2014, s. 29-35Konferensbidrag (Refereegranskat)
    Abstract [en]

    Building facade extraction is the primary step in the recognition process in outdoor scenes. It is also a challenging task since each building can be viewed from different angles or under different lighting conditions. In outdoor imagery, regions such as sky, trees and pavement cause interference for successful building facade recognition. In this paper we propose a knowledge based approach to automatically segment out the whole facade, or major parts of the facade, from an outdoor scene. The found building regions are then subjected to a recognition process. The system is composed of two modules: a building facade region segmentation module and a facade recognition module. In the facade segmentation module, color processing and object position coordinates are used. In the facade recognition module, Chamfer metrics are applied. In a real time recognition scenario, the image with a building is first analyzed in order to extract the facade region, which is then compared to a database with feature descriptors in order to find a match. The results show that the recognition rate depends on the precision of the building extraction part, which in turn depends on the homogeneity of the facade colors.
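    The Chamfer-metric matching mentioned above can be sketched as follows; this is an assumed illustration, not the paper's implementation, and the `chamfer_score` helper, the offset handling and the toy edge maps are invented for the example.

```python
# Minimal Chamfer-matching sketch (assumed illustration, not the paper's code):
# the scene edge map is turned into a distance transform, and a template edge
# map is scored at a given offset by the mean distance under its edge pixels.
import numpy as np
from scipy import ndimage as ndi

def chamfer_score(scene_edges, template_edges, offset):
    # Distance from every pixel to the nearest scene edge
    dist = ndi.distance_transform_edt(~scene_edges)
    dy, dx = offset
    ys, xs = np.nonzero(template_edges)
    ys, xs = ys + dy, xs + dx
    keep = (ys >= 0) & (ys < dist.shape[0]) & (xs >= 0) & (xs < dist.shape[1])
    return dist[ys[keep], xs[keep]].mean()  # lower = better match

# Toy example: a square outline matched against itself at two offsets
scene = np.zeros((80, 80), dtype=bool)
scene[20:60, 20] = scene[20:60, 59] = True
scene[20, 20:60] = scene[59, 20:60] = True
template = scene[15:65, 15:65]
print(chamfer_score(scene, template, (15, 15)))  # aligned: near 0
print(chamfer_score(scene, template, (0, 0)))    # misaligned: larger
```

    In practice the score would be evaluated over many candidate offsets or transformations and the minimum taken as the match.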

  • 779. Åhlén, Julia
    et al.
    Seipel, Stefan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Mapping of roof types in orthophotos using feature descriptors2018Ingår i: Proc. International Multidisciplinary Scientific GeoConference: SGEM 2018, 2018, s. 285-291Konferensbidrag (Refereegranskat)
  • 780.
    Åhlén, Julia
    et al.
    Akademi för Teknik och Miljö, Högskolan i Gävle.
    Seipel, Stefan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Segmentation of shadows and water bodies in high resolution images using ancillary data2016Ingår i: Proc. 16th International Multidisciplinary Scientific GeoConference, 2016, Vol. 1, s. 827-834Konferensbidrag (Refereegranskat)
    Abstract [en]

    High spatial resolution imagery is often affected by shadows, both in urban environments with large variations in surface elevation and in vegetated areas. A common source of bias in classification is that water and shadows are registered as the same area. The radiometric response of the shadowed regions should be restored prior to classification, and to enable that, separate classes of non-shadowed and shadowed areas should be created. Previous work on water extraction using low/medium resolution images mainly faced two difficulties. Firstly, it is difficult to obtain an accurate position of the water boundary, and secondly, shadows of elevated objects, e.g. buildings, bridges, towers and trees, are a typical source of noise for water extraction in urban regions. In high resolution images the problem of separating water and shadows becomes more prominent, since small local variations of intensity values give rise to misclassification. This paper proposes a robust method for separation of shadowed areas and water bodies in high spatial resolution imagery using a hierarchical method on different scales combined with classification of PCA (Principal Component Analysis) bands, which reduces the effects of the radiometric and spatial differences that are commonly associated with pixel-based methods for multisource data fusion. The method uses ancillary data to aid the classification of shadows and water. The proposed method includes three steps: segmentation, classification and postprocessing. To achieve robust segmentation, we apply region merging with three features (PCA bands, NSVDI (Normalized Saturation-Value Difference Index) and height data). NSVDI discriminates shadows and some water. In the second step we use hierarchical region-based classification to identify water regions, after which candidate water pixels are verified against LiDAR DEM data. As a last step we consider shape parameters such as compactness and symmetry to completely remove shadows.
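    Two of the features used in the segmentation step, NSVDI and PCA bands, can be sketched as below; this is a hedged illustration under assumptions (HSV-based NSVDI, scikit-learn PCA, toy data), not the exact processing chain of the paper.

```python
# Rough sketch of two features mentioned in the abstract (assumptions, not the
# exact pipeline): NSVDI computed from an RGB image via HSV, and the first
# principal components of a multi-band stack computed with scikit-learn's PCA.
import numpy as np
from skimage.color import rgb2hsv
from sklearn.decomposition import PCA

def nsvdi(rgb, eps=1e-9):
    hsv = rgb2hsv(rgb)
    s, v = hsv[..., 1], hsv[..., 2]
    return (s - v) / (s + v + eps)   # shadows tend toward high NSVDI

def pca_bands(band_stack, n_components=3):
    h, w, b = band_stack.shape
    flat = band_stack.reshape(-1, b)
    scores = PCA(n_components=n_components).fit_transform(flat)
    return scores.reshape(h, w, n_components)

rng = np.random.default_rng(3)
rgb = rng.random((64, 64, 3))
bands = rng.random((64, 64, 6))
print(nsvdi(rgb).shape, pca_bands(bands).shape)
```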

  • 781.
    Åhlén, Julia
    et al.
    Högskolan i Gävle.
    Seipel, Stefan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    TIME-SPACE VISUALISATION OF AMUR RIVER CHANNEL CHANGES DUE TO FLOODING DISASTER2014Ingår i: Proceedings of International Multidisciplinary Scientific GeoScience Conference (SGEM), 2014, 2014Konferensbidrag (Refereegranskat)
    Abstract [en]

    The analysis of flooding levels is a highly complex temporal and spatial assessment task that involves estimation of distances between references in geographical space as well as estimation of instances along the time-line that coincide with given spatial locations. The aim of this work is to interactively explore changes of the Amur River boundaries caused by the severe flooding in September 2013. In our analysis of river bank changes we use satellite imagery (Landsat 7) to extract the parts belonging to the Amur River, covering the time interval from July 2003 until February 2014. The image data is pre-processed using low level image processing techniques prior to visualization. The purpose of the pre-processing is to extract information about the boundaries of the river and to transform it into a vectorized format suitable as input to the subsequent visualization. We develop visualization tools to explore the spatial and temporal relationships in the change of the river banks. In particular, the visualization shall allow for exploring specific geographic locations and their proximity to the river/floods at arbitrary times. We propose a time-space visualization that builds on edge detection, morphological operations and boundary statistics on Landsat 2D imagery in order to extract the borders of the Amur River. For the visualization we use the time-space cube metaphor. It is based on a 3D rectilinear context, where the 2D geographical coordinate system is extended with a time-axis pointing along the third Cartesian axis. Such a visualization facilitates analysis of the channel shape of the Amur River and thus enables conclusions regarding the defined problem. As a result we demonstrate our time-space visualization for the river Amur, and using a set of geographical point data as a reference we suggest an adequate method of interpolation or imputation that can be employed to estimate values at a given location and time.
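    The stacking that underlies the time-space cube can be sketched as follows; the Canny-based edge extraction, the `time_space_cube` helper and the synthetic image series are assumptions made for illustration, not the authors' tool.

```python
# Illustrative sketch (assumed, not the authors' tool): extract a river edge
# map per acquisition date with Canny and stack the 2D maps along a time axis,
# giving the 3D time-space volume that a cube visualisation can be built on.
import numpy as np
from skimage.feature import canny

def time_space_cube(image_series, sigma=2.0):
    """image_series: list of 2D grayscale arrays, one per date."""
    edge_slices = [canny(img, sigma=sigma) for img in image_series]
    return np.stack(edge_slices, axis=-1)   # shape (rows, cols, n_dates)

rng = np.random.default_rng(4)
series = [rng.random((128, 128)) for _ in range(6)]
cube = time_space_cube(series)
print(cube.shape, cube.dtype)
```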

  • 782.
    Åhlén, Julia
    et al.
    Högskolan i Gävle.
    Seipel, Stefan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Liu, Fei
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Evaluation of the Automatic methods for Building Extraction2014Ingår i: International Journal of Computers and Communications, ISSN 2074-1294, Vol. 8, s. 171-176Artikel i tidskrift (Refereegranskat)
  • 783.
    Öfverstedt, Johan
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Lindblad, Joakim
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Serbian Acad Arts & Sci, Math Inst, Belgrade 11001, Serbia.
    Sladoje, Natasa
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion. Serbian Acad Arts & Sci, Math Inst, Belgrade 11001, Serbia.
    Fast and Robust Symmetric Image Registration Based on Distances Combining Intensity and Spatial Information2019Ingår i: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 28, nr 7, s. 3584-3597Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Intensity-based image registration approaches rely on similarity measures to guide the search for geometric correspondences with high affinity between images. The properties of the used measure are vital for the robustness and accuracy of the registration. In this study a symmetric, intensity interpolation-free, affine registration framework based on a combination of intensity and spatial information is proposed. The excellent performance of the framework is demonstrated on a combination of synthetic tests, recovering known transformations in the presence of noise, and real applications in biomedical and medical image registration, for both 2D and 3D images. The method exhibits greater robustness and higher accuracy than similarity measures in common use, when inserted into a standard gradient-based registration framework available as part of the open source Insight Segmentation and Registration Toolkit (ITK). The method is also empirically shown to have a low computational cost, making it practical for real applications. Source code is available.
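    For orientation only, a generic affine registration run in SimpleITK is sketched below; it uses a standard mean-squares metric as a stand-in rather than the measure proposed in the paper, and the synthetic images and optimizer settings are assumptions.

```python
# Generic affine registration sketch in SimpleITK (NOT the paper's proposed
# distance measure; a standard mean-squares metric is used as a placeholder).
import numpy as np
import SimpleITK as sitk

# Synthetic 2D example: "moving" is a translated copy of "fixed"
yy, xx = np.mgrid[0:96, 0:96].astype(np.float32)
blob = np.exp(-((yy - 48) ** 2 + (xx - 48) ** 2) / 200.0)
fixed = sitk.GetImageFromArray(blob)
moving = sitk.GetImageFromArray(np.roll(blob, (4, 6), axis=(0, 1)))

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMeanSquares()                      # stand-in metric, not the paper's
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(sitk.AffineTransform(2), inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)
final_tx = reg.Execute(fixed, moving)
print(final_tx.GetParameters())
```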

  • 784.
    Öfverstedt, Johan
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Lindblad, Joakim
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Sladoje, Natasa
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Fast and Robust Symmetric Image Registration Based on Intensity and Spatial Information2018Manuskript (preprint) (Övrigt vetenskapligt)
    Abstract [en]

    Intensity-based image registration approaches rely on similarity measures to guide the search for geometric correspondences with high affinity between images. The properties of the used measure are vital for the robustness and accuracy of the registration. In this study a symmetric, intensity interpolation-free, affine registration framework based on a combination of intensity and spatial information is proposed. The excellent performance of the framework is demonstrated on a combination of synthetic tests, recovering known transformations in the presence of noise, and real applications in biomedical and medical image registration, for both 2D and 3D images. The method exhibits greater robustness and higher accuracy than similarity measures in common use, when inserted into a standard gradient-based registration framework available as part of the open source Insight Segmentation and Registration Toolkit (ITK). The method is also empirically shown to have a low computational cost, making it practical for real applications. Source code is available.

  • 785.
    Öfverstedt, Johan
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Lindblad, Joakim
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Sladoje, Natasa
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Robust Symmetric Affine Image Registration2019Ingår i: Swedish Symposium on Image Analysis, 2019Konferensbidrag (Övrigt vetenskapligt)
  • 786.
    Öfverstedt, Johan
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Lindblad, Joakim
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Sladoje, Natasa
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Stochastic Distance Functions with Applications in Object Detection and Image Segmentation2019Ingår i: Swedish Symposium on Image Analysis, 2019Konferensbidrag (Övrigt vetenskapligt)
  • 787.
    Öfverstedt, Johan
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Lindblad, Joakim
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Sladoje, Natasa
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Stochastic Distance Transform2019Ingår i: Discrete Geometry for Computer Imagery, Springer, 2019, s. 75-86Konferensbidrag (Refereegranskat)
    Abstract [en]

    The distance transform (DT) and its many variations are ubiquitous tools for image processing and analysis. In many imaging scenarios, the images of interest are corrupted by noise. This has a strong negative impact on the accuracy of the DT, which is highly sensitive to spurious noise points. In this study, we consider images represented as discrete random sets and observe statistics of DT computed on such representations. We, thus, define a stochastic distance transform (SDT), which has an adjustable robustness to noise. Both a stochastic Monte Carlo method and a deterministic method for computing the SDT are proposed and compared. Through a series of empirical tests, we demonstrate that the SDT is effective not only in improving the accuracy of the computed distances in the presence of noise, but also in improving the performance of template matching and watershed segmentation of partially overlapping objects, which are examples of typical applications where DTs are utilized.
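    One way to read the Monte Carlo variant is sketched below; this is an interpretation of the abstract, not the authors' code, and the keep-probability, sample count and `stochastic_dt` helper are illustrative assumptions.

```python
# Monte Carlo sketch of the stochastic distance transform idea (a reading of
# the abstract, not the authors' implementation): each foreground point is
# kept independently with some probability, the Euclidean DT is computed for
# every random subset, and the DTs are averaged into a noise-robust estimate.
import numpy as np
from scipy import ndimage as ndi

def stochastic_dt(mask, keep_prob=0.9, n_samples=50, seed=0):
    rng = np.random.default_rng(seed)
    acc = np.zeros(mask.shape, dtype=float)
    for _ in range(n_samples):
        kept = mask & (rng.random(mask.shape) < keep_prob)
        # distance_transform_edt measures distance to the nearest zero pixel,
        # so invert: distance from every pixel to the nearest kept point
        acc += ndi.distance_transform_edt(~kept)
    return acc / n_samples

mask = np.zeros((64, 64), dtype=bool)
mask[32, 32] = True
mask[10, 50] = True          # a "spurious noise point"
print(stochastic_dt(mask)[32, 10])
```

    Because the noise point is sometimes dropped from the random subsets, its influence on the averaged distances is damped compared with the ordinary DT.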

  • 788.
    Öfverstedt, Johan
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Sladoje, Natasa
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Lindblad, Joakim
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Distance between vector-valued fuzzy sets based on intersection decomposition with applications in object detection2017Ingår i: Mathematical Morphology and its Applications to Signal and Image Processing, Springer, 2017, Vol. 10225, s. 395-407Konferensbidrag (Refereegranskat)
    Abstract [en]

    We present a novel approach to measuring distance between multi-channel images, suitably represented by vector-valued fuzzy sets. We first apply the intersection decomposition transformation, based on fuzzy set operations, to vector-valued fuzzy representations to enable preservation of joint multi-channel properties represented in each pixel of the original image. Distance between two vector-valued fuzzy sets is then expressed as a (weighted) sum of distances between scalar-valued fuzzy components of the transformation. Applications to object detection and classification on multi-channel images and heterogeneous object representations are discussed and evaluated subject to several important performance metrics. It is confirmed that the proposed approach outperforms several alternative single- and multi-channel distance measures between information-rich image/object representations.

  • 789.
    Öfverstedt, Johan
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Sladoje, Natasa
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Lindblad, Joakim
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Distance Between Vector-valued Images based on Intersection Decomposition with Applications in Object Detection2018Ingår i: Swedish Symposium on Image Analysis, 2018Konferensbidrag (Övrigt vetenskapligt)
  • 790.
    Öfverstedt, Linn
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Why go headless – a comparative study between traditional CMS and the emerging headless trend2018Självständigt arbete på avancerad nivå (yrkesexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

    There has been an exponential increase in the number of websites, digital channels and, consequently, digital content in recent years. Not only is the number of websites increasing, but they are also becoming more complex; it is therefore no longer feasible to handle content and code with the same tools. Content Management Systems (CMS) are the solution to this problem and offer a way of managing content. The market today offers a broad variety of solutions that each have their own advantages, one of the more common being WYSIWYG functionality, which often means that the functionality and the presentation of the content are tightly coupled. "Headless" CMS are a new way of doing things and offer the user a way of managing content without prescribing a way of displaying it. The different types of CMS present advantages and disadvantages from a user-centred point of view as well as from a technical one. The thesis aims to explore these perspectives and form a hypothesis based on the studied cases. The study presents a set of aspects that, depending on the context in which the CMS is used and implemented, can be perceived as either advantages or disadvantages. "Headless" CMS, however, show a tendency to be the preferable choice where the editors have a technical background and the developing party values an agnostic approach when implementing a CMS, whereas a traditional CMS with WYSIWYG functionality tends to be more favourable where stability and editorial freedom are valued.

  • 791.
    Öhrn, Håkan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    General image classifier for fluorescence microscopy using transfer learning2019Självständigt arbete på avancerad nivå (yrkesexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

    Modern microscopy and automation technologies enable experiments which can produce millions of images each day. The valuable information is often sparse, and clever methods are required to find useful data. In this thesis a general image classification tool for fluorescence microscopy images was developed using features extracted from a general Convolutional Neural Network (CNN) trained on natural images. The user selects interesting regions in a microscopy image and then, through an iterative process using active learning, continually builds a training data set to train a classifier that finds similar regions in other images. The classifier uses conformal prediction to find samples that, if labeled, would most improve the learned model, as well as to specify the frequency of errors the classifier commits. The results show that with an appropriate choice of significance level one can reach a high confidence in the true positives. The active learning approach increased the precision, with the downside of finding fewer examples.
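    The feature-extraction plus classifier idea can be sketched as follows; this is an assumed illustration, not the thesis code: the backbone here is an untrained torchvision ResNet-18 standing in for the pretrained network, and the toy images, labels and logistic-regression classifier are placeholders.

```python
# Hedged sketch of the transfer-learning idea (a reading of the abstract, not
# the thesis code): a CNN is used as a fixed feature extractor and a simple
# classifier is trained on the extracted features.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Randomly initialised weights keep the example self-contained; in practice one
# would load ImageNet-pretrained weights to get the transfer-learning effect.
backbone = models.resnet18()
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

def extract_features(batch):                 # batch: (N, 3, 224, 224) tensor
    with torch.no_grad():
        return feature_extractor(batch).flatten(1).numpy()

# Toy "microscopy regions" and labels
images = torch.rand(16, 3, 224, 224)
labels = np.array([0, 1] * 8)
feats = extract_features(images)
clf = LogisticRegression(max_iter=1000).fit(feats, labels)
print(clf.score(feats, labels))
```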

  • 792.
    Ćurić, Vladimir
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Distance Functions and Their Use in Adaptive Mathematical Morphology2014Doktorsavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    One of the main problems in image analysis is the comparison of different shapes in images. It is often desirable to determine the extent to which one shape differs from another. This is usually a difficult task because shapes vary in size, length, contrast, texture, orientation, etc. Shapes can be described using sets of points, crisp or fuzzy. Hence, distance functions between sets have been used for comparing different shapes.

    Mathematical morphology is a non-linear theory related to the shape or morphology of features in the image, and morphological operators are defined by the interaction between an image and a small set called a structuring element. Although morphological operators have been extensively used to differentiate shapes by their size, it is not an easy task to differentiate shapes with respect to other features such as contrast or orientation. One approach for differentiation on these type of features is to use data-dependent structuring elements.

    In this thesis, we investigate the usefulness of various distance functions for: (i) shape registration and recognition; and (ii) construction of adaptive structuring elements and functions.

    We examine existing distance functions between sets, and propose a new one, called the Complement weighted sum of minimal distances, where the contribution of each point to the distance function is determined by the position of the point within the set. The usefulness of the new distance function is shown for different image registration and shape recognition problems. Furthermore, we extend the new distance function to fuzzy sets and show its applicability to classification of fuzzy objects.

    We propose two different types of adaptive structuring elements from the salience map of the edge strength: (i) the shape of a structuring element is predefined, and its size is determined from the salience map; (ii) the shape and size of a structuring element are dependent on the salience map. Using this salience map, we also define adaptive structuring functions. We also present the applicability of adaptive mathematical morphology to image regularization. The connection between adaptive mathematical morphology and Lasry-Lions regularization of non-smooth functions provides an elegant tool for image regularization.
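    As a point of reference for the distances discussed above, a plain (unweighted) sum of minimal distances between two point sets can be computed as below; the complement weighting and fuzzy extensions of the thesis are not reproduced, and the averaging used here is just one common normalisation.

```python
# A small sketch of an (unweighted, averaged) sum of minimal distances between
# two point sets, the starting point that the thesis generalises; the complement
# weighting described above is not reproduced here.
import numpy as np
from scipy.spatial import cKDTree

def sum_of_minimal_distances(a, b):
    """Symmetric average of each point's distance to the nearest point of the
    other set, taken over both directions."""
    d_ab, _ = cKDTree(b).query(a)   # nearest neighbour in b for each point of a
    d_ba, _ = cKDTree(a).query(b)
    return 0.5 * (d_ab.mean() + d_ba.mean())

square = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
shifted = square + np.array([0.2, 0.0])
print(sum_of_minimal_distances(square, shifted))
```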

    Delarbeten
    1. On set distances and their application to image registration
    2009 (Engelska)Ingår i: Proc. 6th International Symposium on Image and Signal Processing and Analysis, Salzburg, Austria: IEEE , 2009, s. 449-454Konferensbidrag, Publicerat paper (Refereegranskat)
    Abstract [en]

    In this paper we study set distances that are used in image processing. We propose a generalization of the Sum of minimal distances and show that its special cases include a metric based on the Symmetric difference. The Hausdorff metric and the Chamfer matching distances are also closely related to the presented framework. In addition, we define the Complement set distance of a given distance. We evaluate the considered distances with respect to their applicability to image object registration. We perform comparative evaluations with respect to noise sensitivity, as well as with respect to rigid body transformations. We conclude that the family of Generalized sum of minimal distances has many desirable properties for this application.

    Ort, förlag, år, upplaga, sidor
    Salzburg, Austria: IEEE, 2009
    Nationell ämneskategori
    Datorseende och robotik (autonoma system)
    Forskningsämne
    Datoriserad bildanalys; Datoriserad bildbehandling
    Identifikatorer
    urn:nbn:se:uu:diva-110684 (URN)10.1109/ISPA.2009.5297672 (DOI)978-953-184-135-1 (ISBN)
    Konferens
    6th International Symposium on Image and Signal Processing and Analysis, Salzburg, Austria, 16-18 September, 2009
    Tillgänglig från: 2009-11-26 Skapad: 2009-11-23 Senast uppdaterad: 2018-12-18
    2. A new set distance and its application to shape registration
    2014 (Engelska)Ingår i: Pattern Analysis and Applications, ISSN 1433-7541, E-ISSN 1433-755X, Vol. 17, nr 1, s. 141-152Artikel i tidskrift (Refereegranskat) Published
    Nationell ämneskategori
    Diskret matematik
    Identifikatorer
    urn:nbn:se:uu:diva-220413 (URN)10.1007/s10044-012-0290-x (DOI)000330839400011 ()
    Tillgänglig från: 2012-08-23 Skapad: 2014-03-13 Senast uppdaterad: 2018-12-18Bibliografiskt granskad
    3. Distance measures between digital fuzzy objects and their applicability in image processing
    2011 (Engelska)Ingår i: Combinatorial Image Analysis / [ed] Jake Aggarwal, Reneta Barneva, Valentin Brimkov, Kostadin Koroutchev, Elka Koroutcheva, Springer Berlin/Heidelberg, 2011, s. 385-397Konferensbidrag, Publicerat paper (Refereegranskat)
    Abstract [en]

    We present two different extensions of the Sum of minimal distances and the Complement weighted sum of minimal distances to distances between fuzzy sets. We evaluate to what extent the proposed distances show monotonic behavior with respect to increasing translation and rotation of digital objects, in noise-free as well as in noisy conditions. Tests show that one of the extension approaches leads to distances exhibiting very good performance. Furthermore, we evaluate distance based classification of crisp and fuzzy representations of objects at a range of resolutions. We conclude that the proposed distances are able to utilize the additional information available in a fuzzy representation, thereby leading to improved performance of related image processing tasks.

    Ort, förlag, år, upplaga, sidor
    Springer Berlin/Heidelberg, 2011
    Serie
    Lecture Notes in Computer Science ; 6636
    Nyckelord
    Fuzzy sets, set distance, registration, classification
    Nationell ämneskategori
    Datorseende och robotik (autonoma system) Diskret matematik
    Forskningsämne
    Datoriserad bildanalys; Datoriserad bildbehandling
    Identifikatorer
    urn:nbn:se:uu:diva-157186 (URN)10.1007/978-3-642-21073-0_34 (DOI)978-3-642-21072-3 (ISBN)
    Konferens
    Internatiional Workshop on Combinatorial Image Analysis, IWCIA 2011
    Tillgänglig från: 2011-08-18 Skapad: 2011-08-18 Senast uppdaterad: 2018-12-18
    4. Salience adaptive structuring elements
    2012 (Engelska)Ingår i: IEEE Journal on Selected Topics in Signal Processing, ISSN 1932-4553, E-ISSN 1941-0484, Vol. 6, nr 7, s. 809-819Artikel i tidskrift (Refereegranskat) Published
    Abstract [en]

    Spatially adaptive structuring elements adjust their shape to the local structures in the image, and are often defined by a ball in a geodesic distance or gray-weighted distance metric space. This paper introduces salience adaptive structuring elements as spatially variant structuring elements that modify not only their shape, but also their size according to the salience of the edges in the image. Morphological operators with salience adaptive structuring elements shift edges with high salience to a less extent than those with low salience. Salience adaptive structuring elements are less flexible than morphological amoebas and their shape is less affected by noise in the image. Consequently, morphological operators using salience adaptive structuring elements have better properties.
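    A toy reading of the adaptive idea is sketched below; it is an assumption-laden illustration, not the paper's operators: the salience map is approximated by the distance to Canny edges, the radius mapping is arbitrary, and the naive per-pixel loop ignores the adjunction issues the paper addresses.

```python
# Toy illustration of a salience-adaptive structuring element (an assumption,
# not the paper's operators): the structuring-element radius at each pixel grows
# with its salience value, here taken as the distance to the nearest edge, and a
# flat dilation is computed with that locally varying disk-like window.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import canny

def salience_adaptive_dilation(image, max_radius=5):
    edges = canny(image, sigma=1.0)
    salience = ndi.distance_transform_edt(~edges)         # far from edges = large SE
    radii = np.clip(salience, 1, max_radius).astype(int)
    out = np.empty_like(image)
    padded = np.pad(image, max_radius, mode='edge')
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            r = radii[y, x]
            win = padded[y + max_radius - r:y + max_radius + r + 1,
                         x + max_radius - r:x + max_radius + r + 1]
            out[y, x] = win.max()                          # flat dilation
    return out

img = np.random.default_rng(5).random((64, 64))
print(salience_adaptive_dilation(img).shape)
```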

    Nyckelord
    Adaptive mathematical morphology, anisotropic filtering, morphological amoebas, salience distance transform
    Nationell ämneskategori
    Annan matematik
    Identifikatorer
    urn:nbn:se:uu:diva-181248 (URN)10.1109/JSTSP.2012.2207371 (DOI)000310138400007 ()
    Tillgänglig från: 2012-09-20 Skapad: 2012-09-20 Senast uppdaterad: 2017-12-07Bibliografiskt granskad
    5. Adaptive structuring elements based on salience information
    2012 (Engelska)Ingår i: Computer Vision and Graphics / [ed] L. Bolc, K. Wojciechowski, R. Tadeusiewicz, L.J. Chmielewski, Springer, 2012, s. 321-328Konferensbidrag, Publicerat paper (Övrigt vetenskapligt)
    Abstract [en]

    Adaptive structuring elements modify their shape and size according to the image content and may outperform fixed structuring elements. Without any restrictions, they suffer from a high computational complexity, which is often higher than linear with respect to the number of pixels in the image. This paper introduces adaptive structuring elements that have predefined shape, but where the size is adjusted to the local image structures. The size of adaptive structuring elements is determined by the salience map that corresponds to the salience of the edges in the image, which can be computed in linear time. We illustrate the difference between the new adaptive structuring elements and morphological amoebas. As an example of its usefulness, we show how the new adaptive morphological operations can isolate the text in historical documents.

    Ort, förlag, år, upplaga, sidor
    Springer, 2012
    Serie
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 7594
    Nationell ämneskategori
    Annan matematik Annan data- och informationsvetenskap
    Identifikatorer
    urn:nbn:se:uu:diva-181246 (URN)10.1007/978-3-642-33564-8-39 (DOI)000313005700039 ()978-3-642-33564-8 (ISBN)
    Konferens
    International Conference on Computer Vision and Graphics, September 24-26, 2012, Warsaw, Poland
    Tillgänglig från: 2012-09-20 Skapad: 2012-09-20 Senast uppdaterad: 2018-01-12Bibliografiskt granskad
    6. Salience-Based Parabolic Structuring Functions
    2013 (Engelska)Ingår i: Mathematical Morphology and Its Applications to Signal and Image Processing, Springer Berlin/Heidelberg, 2013, s. 183-194Konferensbidrag, Publicerat paper (Refereegranskat)
    Abstract [en]

    It has been shown that the use of the salience map based on the salience distance transform can be useful for the construction of spatially adaptive structuring elements. In this paper, we propose salience-based parabolic structuring functions that are defined for a fixed, predefined spatial support, and have low computational complexity. In addition, we discuss how to properly define adjunct morphological operators using the new spatially adaptive structuring functions. It is also possible to obtain flat adaptive structuring elements by thresholding the salience-based parabolic structuring functions.

    Ort, förlag, år, upplaga, sidor
    Springer Berlin/Heidelberg, 2013
    Serie
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 7883
    Nationell ämneskategori
    Annan matematik
    Forskningsämne
    Matematik med inriktning mot tillämpad matematik; Datoriserad bildanalys
    Identifikatorer
    urn:nbn:se:uu:diva-204715 (URN)10.1007/978-3-642-38294-9_16 (DOI)978-3-642-38293-2 (ISBN)
    Konferens
    11th International Symposium on Mathematical Morphology
    Tillgänglig från: 2013-08-09 Skapad: 2013-08-09 Senast uppdaterad: 2014-04-29Bibliografiskt granskad
    7. Morphological image regularization using adaptive structuring functions
    (Engelska)Manuskript (preprint) (Övrigt vetenskapligt)
    Nationell ämneskategori
    Annan matematik
    Identifikatorer
    urn:nbn:se:uu:diva-221161 (URN)
    Tillgänglig från: 2014-03-25 Skapad: 2014-03-25 Senast uppdaterad: 2014-04-29
    8. Adaptive Mathematical Morphology: a survey of the field
    2014 (Engelska)Ingår i: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 47, s. 18-28Artikel i tidskrift (Refereegranskat) Published
    Abstract [en]

    We present an up-to-date survey on the topic of adaptive mathematical morphology. A broad review of research performed within the field is provided, as well as an in-depth summary of the theoretical advances within the field. Adaptivity can come in many different ways, based on different attributes, measures, and parameters. Similarities and differences between a few selected methods for adaptive structuring elements are considered, providing perspective on the consequences of different types of adaptivity. We also provide a brief analysis of perspectives and trends within the field, discussing possible directions for future studies.

    Nyckelord
    Overview, Mathematical morphology, Adaptive morphology, Adaptive structuring elements, Adjunction property
    Nationell ämneskategori
    Datorseende och robotik (autonoma system)
    Identifikatorer
    urn:nbn:se:uu:diva-221159 (URN)10.1016/j.patrec.2014.02.022 (DOI)000339999200003 ()
    Tillgänglig från: 2014-03-18 Skapad: 2014-03-25 Senast uppdaterad: 2018-01-11Bibliografiskt granskad