Publications from Uppsala University (uu.se)
Semantic and Verbatim Word Spotting using Deep Neural Networks
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
2016 (English). In: Proceedings of the 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016, p. 307-312. Conference paper, Published paper (Refereed).
Abstract [en]

In the last few years, deep convolutional neural networks have become ubiquitous in computer vision, achieving state-of-the-art results on problems like object detection, semantic segmentation, and image captioning. However, they have not yet been widely investigated in the document analysis community. In this paper, we present a word spotting system based on convolutional neural networks. We train a network to extract a powerful image representation, which we then embed into a word embedding space. This allows us to perform word spotting using both query-by-string and query-by-example in a variety of word embedding spaces, both learned and handcrafted, for verbatim as well as semantic word spotting. Our novel approach is versatile and the evaluation shows that it outperforms the previous state of the art for word spotting on standard datasets.
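As a hedged illustration only (not the authors' implementation; embed_image and embed_string below are placeholders for the paper's CNN branch and its learned or handcrafted word embeddings), retrieval in a shared embedding space can be sketched like this:

    import numpy as np

    def cosine_similarity(a, b):
        """Cosine similarity between two embedding vectors."""
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    def rank_by_query(query_embedding, candidate_embeddings):
        """Return indices of candidate word images, most similar first."""
        scores = [cosine_similarity(query_embedding, c) for c in candidate_embeddings]
        return sorted(range(len(scores)), key=lambda i: -scores[i])

    # Query-by-string:  query_embedding = embed_string("church")      (placeholder)
    # Query-by-example: query_embedding = embed_image(example_image)  (placeholder)
    # candidate_embeddings = [embed_image(img) for img in word_images]
    # ranking = rank_by_query(query_embedding, candidate_embeddings)

Because both query types land in the same space, the same ranking function serves verbatim and semantic spotting; only the choice of embedding changes.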

Place, publisher, year, edition, pages
2016. p. 307-312
Series
International Conference on Frontiers in Handwriting Recognition, ISSN 2167-6445
Keywords [en]
handwritten word spotting, convolutional neural networks, deep learning, word embeddings
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computerized Image Processing
Identifiers
URN: urn:nbn:se:uu:diva-306667
DOI: 10.1109/ICFHR.2016.60
ISI: 000400052400056
ISBN: 978-1-5090-0981-7 (print)
OAI: oai:DiVA.org:uu-306667
DiVA, id: diva2:1044046
Conference
15th International Conference on Frontiers in Handwriting Recognition (ICFHR), October 23-26, 2016, Shenzhen, China.
Projects
q2b
Funder
Swedish Research Council, 2012-5743
Riksbankens Jubileumsfond, NHS14-2068:1
Available from: 2016-11-01. Created: 2016-11-01. Last updated: 2019-04-08.
In thesis
1. Learning based Word Search and Visualisation for Historical Manuscript Images
2019 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Today, work with historical manuscripts is nearly exclusively done manually, by researchers in the humanities as well as laypeople mapping out their personal genealogy. This is a highly time-consuming endeavour, as it is not uncommon to spend months with the same volume of a few hundred pages. The last few decades have seen an ongoing effort to digitise manuscripts, both for preservation purposes and to increase accessibility. This has the added effect of enabling the use of methods and algorithms from Image Analysis and Machine Learning that have great potential both in making existing work more efficient and in creating new methodologies for manuscript-based research.

The first part of this thesis focuses on Word Spotting, the task of searching for a given text query in a manuscript collection. This can be broken down into two tasks: detecting where the words are located on the page, and ranking the words according to their similarity to a search query. We propose Deep Learning models to do both, first separately and then simultaneously, and successfully search through a large manuscript collection consisting of over a hundred thousand pages.

A limiting factor in applying learning-based methods to historical manuscript images is the cost, and therefore the lack, of the annotated data needed to train machine learning models. We propose several ways to mitigate this problem, including generating synthetic data, augmenting existing data to get better value from it, and learning from pre-existing, partially annotated data that was previously unusable.
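As a sketch of the augmentation idea only (the thesis's actual augmentation and synthetic-data pipelines are not specified here; the shift and noise parameters are illustrative assumptions), simple perturbations of existing word images could be generated like this:

    import numpy as np

    def augment_word_image(img, rng, max_shift=3, noise_std=0.02):
        """Cheap augmentation of a grayscale word image with values in [0, 1]:
        a small random translation followed by additive Gaussian noise."""
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))
        noisy = shifted + rng.normal(0.0, noise_std, size=img.shape)
        return np.clip(noisy, 0.0, 1.0)

    # rng = np.random.default_rng(0)
    # augmented_batch = [augment_word_image(img, rng) for img in word_images]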

In the second part, a method for visualising manuscript collections called the Image-based Word Cloud is proposed. Much like its text-based counterpart, it arranges the most representative words in a collection into a cloud, where the size of each word is proportional to its frequency of occurrence. This grants a user a single-image overview of a manuscript collection, regardless of its size. We further propose a way to estimate a manuscript's production date. This can grant historians context that is crucial for correctly interpreting the contents of a manuscript.
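A minimal sketch of the frequency-to-size mapping behind such a word cloud (the layout and the rendering of the actual word images are omitted; the size range and top_k cutoff are illustrative assumptions):

    from collections import Counter

    def word_cloud_sizes(words, top_k=50, min_size=12, max_size=96):
        """Map the top_k most frequent words to display sizes that grow
        linearly with their frequency of occurrence."""
        counts = Counter(words).most_common(top_k)
        if not counts:
            return {}
        max_count = counts[0][1]
        return {word: min_size + (max_size - min_size) * count / max_count
                for word, count in counts}

    # Example: word_cloud_sizes(["och", "och", "kyrka", "och", "gård"])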

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2019. p. 82
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 1798
Keywords
Word Spotting, Convolutional Neural Networks, Deep Learning, Region Proposals, Historical Manuscripts, Computer Vision, Image Analysis, Visualisation, Document Analysis
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computerized Image Processing
Identifiers
URN: urn:nbn:se:uu:diva-381308
ISBN: 978-91-513-0633-9
Public defence
2019-06-04, TLS (Tidskriftläsesalen), Carolina Rediviva, Dag Hammarskjölds väg 1, Uppsala, 10:15 (English)
Opponent
Supervisors
Funder
Swedish Research Council, 2012-5743Riksbankens Jubileumsfond, NHS14-2068:1
Available from: 2019-05-13. Created: 2019-04-08. Last updated: 2019-06-18.

Open Access in DiVA

fulltext (699 kB), 1007 downloads
File information
File name: FULLTEXT01.pdf
File size: 699 kB
Checksum (SHA-512): fbc1d8ffa156dfe637ae2423fd2c85ee2055d9a5dc7d1d6498844f88cfe45b7d6f05e7b3aa1893944720538fcc8a5c2f2f3469eb847371531838764bbc85f36d
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text

Search in DiVA

By author/editor
Wilkinson, Tomas; Brun, Anders
By organisation
Division of Visual Information and Interaction; Computerized Image Analysis and Human-Computer Interaction
Computer Vision and Robotics (Autonomous Systems)

Search outside of DiVA

Google, Google Scholar
Total: 1009 downloads
The number of downloads is the sum of all downloads of full texts. It may include, for example, previous versions that are now no longer available.

Total: 1351 hits