Uppsala University Publications
1 - 21 of 21
  • 1.
    Eriksson, Johan
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Art History.
    Widén, Per
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of History of Science and Ideas. Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Art History.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Hayashi, Masaki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Blickar och betydelser: Digitala rekonstruktioner av tavelhängningarna på Stockholms slott 1795–1866 [Gazes and Meanings: Digital Reconstructions of the Picture Hangings at the Stockholm Palace 1795–1866] (2019). In: 1700-tal: Nordic Journal for Eighteenth-Century Studies, ISSN 1652-4772, Vol. 16, p. 79-103. Article in journal (Refereed)
    Abstract [en]

    During the years 1795–1866 the Swedish national art collection, today's Nationalmuseum, was on display at the Royal Palace in Stockholm in what was known as Kongl. Museum. This museum consisted of two sculpture galleries adjacent to the palace garden Logården, as well as a paintings gallery and a few more rooms on the second floor. While the sculpture galleries are well known and have been reconstructed in situ, there has been much less research on the display of paintings at the museum. In the cross-disciplinary research project »Virtual Museum at the Royal Palace» we are using a digital 3D model to reconstruct the display of paintings in the so-called smaller gallery of the palace as it appeared during the period. The reconstruction deals with two different hangings of the gallery, in 1795 and c. 1843, made by the curators Carl Fredric Fredenheim and Lars Jacob von Röök respectively. Our preliminary findings show that, contrary to earlier claims, the two hangings are rather different and built on quite different ideologies of museum display, something that becomes visible thanks to the method of using digital 3D models as a basis for analysis.

  • 2.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    A New Virtual Museum Equipped with Automatic Video Content Generator (2014). Conference paper (Refereed)
    Abstract [en]

    Virtual museum services have been deployed in many places in recent years, owing to advances in video and network technology. In a virtual museum, people primarily experience the prepared content actively, with a mouse, a touch panel or specially designed tangible devices. In a real museum space, on the other hand, people appreciate the artifacts passively, walking around the space freely and without stress. It can be said that the virtual museum urges people to engage with it rather actively when compared to the real museum. We have been studying and developing a new type of virtual museum that enables people to participate in the space in both an active and a passive way, by implementing various new functions. This time, we developed a new virtual museum equipped with a video content generator that uses the virtual exhibition space modeled with 3D computer graphics (CG). The video content is created in real time from the 3DCG-modeled museum space as it is, adding appropriate visual and audio effects such as camerawork, superimposed text, synthesized voice narration, background music, etc. Since the system works in the 3DCG space, a user can easily go back and forth between the two modes of watching the video content passively and walking through the space actively with a wheel mouse. In this paper, we first introduce the main virtual museums in the world. Then we describe our method: 1) a specially designed walkthrough algorithm, 2) the video content generator using the 3DCG museum space, and 3) the seamless integration of 1) and 2). We then describe our functioning prototype, followed by the conclusion and future plans.

  • 3.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Automatic CG Talk Show Generation from the Internet Forum (2016). In: Proceedings of SIGRAD2016, 2016. Conference paper (Refereed)
    Abstract [en]

    We have developed an application to produce Computer Graphics (CG) animations in TV talk show formats automatically from the Internet forum. First, an actual broadcast talk show is analyzed to obtain data in regards to camera changes, lighting, studio set-up, etc. The result of the analysis is then implemented into the application and a CG animation is created using the TV program Making Language (TVML). The application works in the Unity game engine with CG characters speaking with computer-generated voices. We have successfully created a CG-generated TV talk show which allows users to "watch" the TV show format generated from the text information coming from the forum on the Internet.

  • 4.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Automatic Generation of CG Animation from the Internet Forum "2 Channeru" (2018). In: Journal of the Institute of Image Information and Television Engineers, ISSN 1342-6907, Vol. 72, no 10, p. 189-196. Article in journal (Refereed)
    Abstract [en]

    We are conducting research and development on automatically converting various kinds of information, beginning with Web sites, into TV-program-like CG animation. As one such attempt, we developed an application that automatically generates computer graphics (CG) animation from the "2channel" bulletin board. The basic method is to analyze actual TV program footage, extract the production know-how used there, formalize it into rules and numerical parameters, and implement these in software, so as to obtain CG animation that imitates a TV program. In this work, we analyzed the camera switching in one hour of an actually broadcast debate program and turned it into an algorithm (an illustrative sketch of such camera-switching rules appears at the end of this publication list). In this paper, we explain this process in detail and describe the application created with this method. We also report an evaluation experiment on the resulting CG animation, which clarified the effectiveness of the method and the remaining issues.

  • 5.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Automatic Generation of Personal Virtual Museum (2016). In: Proceedings of CyberWorlds2016 / [ed] Sourin, E., 2016, p. 219-222. Conference paper (Refereed)
    Abstract [en]

    We have developed a Virtual Museum with real-time 3DCG capable of exhibiting arbitrary planar artifacts, such as paintings, specified by a user. The pictures are collected from Internet sites such as Wikimedia Commons via bookmarks supplied by the user. The artifact images are displayed at life size, automatically aligned on the walls of the museum with picture frames and generated captions (an illustrative sketch of this life-size placement step appears at the end of this publication list). This process is driven by metadata extracted through Web scraping of the target Web sites. The museum space is realistically modeled with high resolution and sophisticated illumination, and the user can walk through the space. The system enables users to create their own personalized museums with their favorite pictures exhibited in a realistic way.

  • 6.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Building Virtual Museum Exhibition System as a Medium (2019). In: 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE 2019), IEEE, 2019. Conference paper (Refereed)
    Abstract [en]

    We have constructed a circulation system based on a proposed format for exhibition data in the virtual museum. The virtual museum is built with real-time computer graphics that a user can walk through to see the displayed artworks. The circulation system for artworks and museum spaces is built on the Internet and is similar to that of the e-book. We have successfully established a virtual exhibition system fulfilling the requirements to be a medium. The working system that we have developed is described.

  • 7.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Implementation of the Text-Generated TV (2015). Conference paper (Refereed)
    Abstract [en]

    This paper describes the implementation of the Text-Generated TV that we had previously proposed. It circulates text scripts on the network, and a user can watch TV program videos with a specially designed player converting the scripts to computer graphics animations. We have developed the Player prototype in the Unity game engine for viewers and deployed the Text-Generated TV broadcast station on the server, where actual contents are ready to view. The software is downloadable and a user can actually watch TV with the player on a PC.

  • 8.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Microtone Analysis of Blues Vocal: Can Hatsune-Miku sing the Blues? (2014). Conference paper (Refereed)
  • 9.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Open Framework Facilitating Automatic Generation of CG Animation from Web Site (2015). Conference paper (Refereed)
    Abstract [en]

    We have been studying and developing a system which enables generating Computer Graphics Animation (CGA) automatically by processing the HTML data of a Web site. In this paper, we propose an open framework to facilitate this. The framework functions entirely on the server side, obtaining the HTML, converting it to a script describing the CGA story and updating the script. On the client side, a user accesses the script on the server to visualize it using real-time CG characters with synthesized voice, camera work, superimposing, sound file playback, etc. We have constructed the framework on the server and deployed the substantial engines to convert Web sites to CGAs (an illustrative sketch of this HTML-to-script conversion appears at the end of this publication list). This paper describes the details of the framework and also shows some example projects providing an automatically generated news show, a talk show and a personal blog visualization.

  • 10.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Text Generated TV: A New Television System Delivering Visual Content Created Directly by Text (2014). In: Proceedings of IWAIT 2014, 2014. Conference paper (Refereed)
    Abstract [en]

    We propose a new television system based on a methodology which circulates a text-based script representing visual content on the Internet instead of transmitting complete video data. This Text Generated TV is realized by a technology called T2V (Text-To-Vision), which enables creating TV-program-like CG animation (CGA) automatically from a script. Our new TV system is made by integrating the research results of the T2V technology that we have been studying. The new TV system provides User-Generated Content, automatically generated CGA from text sources available on the Internet, and interactive, video-game-like applications in a TV context. The Text Generated TV can be regarded as a form of object-based content representation. Hence, it offers many possibilities and great flexibility, and we believe it has large potential as a future TV system. In this paper, we introduce our concept and a prototype development, and discuss new aspects of our approach.

  • 11.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Iguchi, Akihiko
    Astrodesign, Inc.
    Virtual Museum Equipped with Automatic Video Content Generator (2016). In: ITE Transactions on Media Technology and Applications (MTA), E-ISSN 2186-7364, Vol. 4, no 1, p. 41-48. Article in journal (Refereed)
    Abstract [en]

    We have been developing a new type of Virtual Museum which enables users to participate in the space with both active and passive modes of operation. In the "active mode", the new virtual museum provides a user walkthrough using the realistic 3DCG-modeled museum space and the artifacts in it. In the "passive mode", the system adds desired visual and audio effects such as camerawork, superimposed text, synthesized voice narration, post-production processes, background music and so on to give users a TV-commentary-style CG animation. Users can easily transition back and forth between the two modes of actively walking through the space and passively watching the video content. This paper describes the details of the system design and the implementation, followed by a discussion of the functioning prototype.

  • 12.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nigorikawa, Takesato
    ProgMind, Inc.
    System Development Kit of T2V in the Unity: T2V Engine Capable of Converting Script to CG Animation in the Unity Game Engine (2014). Conference paper (Other academic)
    Abstract [en]

    T2V (Text-To-Vision) is a technology which enables a computer to generate TV-program-like CG animation from a given script. This time, we have developed a system development kit (SDK) which makes it possible for developers to create various interactive applications in Unity utilizing the T2V technology. We first explain the SDK and its usage. Secondly, we introduce two applications made using the SDK: 1) automatic generation of a talk show from a bulletin board on the Internet, and 2) an interactive quiz application with a multi-story structure.

  • 13.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Shishikui, Yoshiaki
    Meiji University, Department of Frontier Media Science, Tokyo, Japan.
    Rap Music Video Generator: Write a Script to Make Your Rap Music Video with Synthesized Voice and CG Animation (2017). Conference paper (Refereed)
    Abstract [en]

    We have made an application for making rap music videos with CG animation by writing out a simple script. AquesTalk and TVML (TV program Making Language) are used for synthesized voice and real-time CG generation, respectively. A user can easily enjoy making a rap music video by writing speech texts and character movements along with the music beat in the script (an illustrative beat-timing sketch appears at the end of this publication list).

  • 14.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Tsuruta, Naoya
    Tokyo University of Technology, Tokyo, Japan.
    Sasaki, Kazuo
    Tokyo University of Technology, Tokyo, Japan.
    Kondo, Kunio
    Tokyo University of Technology, Tokyo, Japan.
    Wordpress-based Blog System with a Capability of Showing Entries by TV-program-like CG Animations (2018). Conference paper (Other academic)
    Abstract [en]

    We have developed a blog system with Wordpress which is capable of automatically converting its blog entries into CG animations. By embedding the necessary functions in a Wordpress theme, a user can select this theme and construct a blog where readers can watch a CG animation instead of reading the blog entry. The CG animation is created with TVML (TV program Making Language), a technology which enables making TV-program-like animations with CG characters with synthesized voices, image display, sound playback, superimposing and so on. Beyond contributing to the Wordpress community, in particular as an auxiliary function of blog services, this could also work as a production tool for creating various types of CG animations rather than only as a blogging aid.

  • 15.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Tsuruta, Naoya
    School of Media Science, Tokyo University of Technology.
    Teraoka, Takehiro
    School of Media Science, Tokyo University of Technology.
    Sasaki, Kazuo
    School of Media Science, Tokyo University of Technology.
    Usami, Wataru
    School of Media Science, Tokyo University of Technology.
    Mikami, Koji
    School of Media Science, Tokyo University of Technology.
    Kikuchi, Tsukasa
    School of Media Science, Tokyo University of Technology.
    Takeshima, Yuriko
    School of Media Science, Tokyo University of Technology.
    Kondo, Kunio
    School of Media Science, Tokyo University of Technology.
    Automatic Generation of a TV Programme from Blog Entries (2018). In: Adjunct Proceedings of ACM TVX 2018, 2018. Conference paper (Refereed)
    Abstract [en]

    TVML (TV program Making Language) is a technology capable of producing TV (television)-programme-like Computer Graphics (CG) animation from a written text script. We originally developed TVML and have been studying generative content with the aid of TVML. This time, we have created an application that automatically converts blog posts into CG animations in a TV news show format. The process is: 1) fetch the HTML of the blog posts and perform Web scraping and natural language processing to obtain summarized speech texts, 2) automatically apply a show format, obtained from the analysis of a professional TV programme, to produce a TVML script, and 3) apply the CG character, artworks, etc. that fit the blog content to obtain the final CG animation. In the demo session, we will explain the method and demonstrate the working application on a PC connected to the Internet, showing CG animations actually created on site.

  • 16.
    Hayashi, Masaki
    et al.
    Gotland University, School of Game Design, Technology and Learning Processes.
    Nakajima, Masayuki
    Gotland University, School of Game Design, Technology and Learning Processes.
    Bachelder, Steven
    Gotland University, School of Game Design, Technology and Learning Processes.
    International Standard of Automatic and Intelligent News Broadcasting System (2012). In: Proceedings of NICOGRAPH International 2012, 2012, p. 1234-1237. Conference paper (Refereed)
    Abstract [en]

    We propose an automatic and intelligent news broadcasting system, which generates full-CG animated news shows from original text formats from the Internet. The news broadcasts are delivered to users on multiple platforms in the language of their choice. Users are also provided with interactive and intelligent news services. This paper introduces the overall system and provides a feasibility test and working model of the news show application. The example shown is generated from an HTML Internet news site. We also describe the method of constructing a practical system. The future plan of action is to conduct large-scale experiments and field tests and thereafter implement the research outcomes into the standardization of the next-generation TV broadcast system.

  • 17.
    Hayashi, Masaki
    et al.
    Gotland University, School of Game Design, Technology and Learning Processes.
    Nakajima, Masayuki
    Gotland University, School of Game Design, Technology and Learning Processes.
    Bachelder, Steven
    Gotland University, School of Game Design, Technology and Learning Processes.
    TV News Show Broadcast Generated Automatically from Data on the Internet (2012). In: Proceedings of 2012 ITE Annual Conference, 2012, p. 1-2. Conference paper (Refereed)
    Abstract [en]

    We propose an automatic news broadcasting system, which generates full-CG animated news shows from original text formats from the Internet. This paper introduces the overall system and provides a feasibility test and working model of the news show application generated from an HTML Internet news site. We also describe the future collaboration work.

  • 18.
    Hayashi, Masaki
    et al.
    Gotland University, School of Game Design, Technology and Learning Processes.
    Nakajima, Masayuki
    Gotland University, School of Game Design, Technology and Learning Processes.
    Bachelder, Steven
    Gotland University, School of Game Design, Technology and Learning Processes.
    Iguchi, Akihiko
    Astrodesign, Inc. Tokyo, Japan.
    Machida, Satoshi
    Astrodesign, Inc. Tokyo, Japan.
    Ultra High Resolution 4K/8K Real-time CG System and Its Application (2013). In: Proceedings of IWAIT2013, 2013, p. 4-. Conference paper (Refereed)
    Abstract [en]

    We propose an 'Ultra-CG project' which promotes an extremely high-definition real-time computer graphics system with a resolution of 4K and 8K (Super Hi-Vision), higher than that of conventional HDTV. It is important for the project to study not only hardware and software requirements but also content creation methodology for content shown on extremely high-resolution displays. We have first developed a functioning test system, which exhibits a 'Virtual Museum' in 4K resolution, to assess the validity of our approach and clarify the tasks for further application development. The system consists of a PC with high-speed graphics cards and a 4K monitor. The real-time 3DCG software and all the CG models are built on Unity, a 3DCG game engine used worldwide. We are now considering various feasible applications built on the system, such as exhibition, entertainment, medical use and more. This paper describes the test system and discusses the future applications and the collaboration of content creators and system engineers.

  • 19.
    Hayashi, Masaki
    et al.
    Gotland University, School of Game Design, Technology and Learning Processes.
    Nakajima, Masayuki
    Gotland University, School of Game Design, Technology and Learning Processes.
    Bachelder, Steven
    Gotland University, School of Game Design, Technology and Learning Processes.
    Iguchi, Akihiko
    Machida, Satoshi
    Virtual Museum on 4K Ultra-high Resolution Real-time CG System (2013). In: Proceedings of Virtual Reality Technical Seminar, 2013. Conference paper (Other academic)
    Abstract [en]

    We have researched and developed a 'Virtual Museum' on an extremely high-definition real-time computer graphics system with a resolution of 4K and 8K (Super Hi-Vision). We first developed a functioning test system, which exhibits Japanese artifacts, 'Ukiyoe and panel', in 4K resolution. In our system, the artifacts have been digitized in ultra high resolution and then positioned in a high-quality modeled exhibition space. A user can walk through the exhibition space, viewing the artifacts from a distance and also getting closer to seamlessly observe their detailed surfaces. With this method, we have successfully enhanced both the sense of being there and the sense of realness. In this paper, we first survey several virtual museums in practical use, then explain the details of our system and present experimental results with a discussion comparing them with the existing virtual museums.

  • 20.
    Hayashi, Masaki
    et al.
    Gotland University, School of Game Design, Technology and Learning Processes.
    Nakajima, Masayuki
    Gotland University, School of Game Design, Technology and Learning Processes.
    Bachelder, Steven
    Gotland University, School of Game Design, Technology and Learning Processes.
    Nigorikawa, Takesato
    Implementation of T2V: Text-To-Vision on Game Engine UNITY (2013). Conference paper (Other academic)
    Abstract [en]

    We have been developing the T2V (Text-To-Vision) technology, which enables producing CG animation from a given script. We have developed an application called 'T2V Player' and have distributed it as freeware for years. The application works on a Windows PC to produce TV-program-like animation from user-input text using real-time CG, voice synthesis techniques, etc. In this paper, we introduce the prototype of 'T2V on UNITY', which has been developed from scratch on the UNITY game engine. Owing to UNITY, we have succeeded in enhancing its functionality with multi-platform support, the availability of CG character data circulated in the UNITY community, the capability of applying the T2V method to game development, and more.

  • 21.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Shishikui, Yoshiaki
    Meiji University.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    ラップスクリプト: テキストを書いて音声合成とCGアニメのラップが作れる [Rap Script: Write Text to Create a Rap with Synthesized Voice and CG Animation] (2017). Conference paper (Other (popular science, discussion, etc.))
    Abstract [en]

    We have made an application for making rap music with CG animation by writing out a simple script. AquesTalk and TVML are used for synthesized voice and real-time CG generation, respectively. A user can easily enjoy making a rap music video by writing speech texts and character movements along with the music beat in the script.
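Entries 3, 4 and 12 above describe analyzing camera switching in an actually broadcast talk show, extracting the production know-how, and turning it into rules that drive generated CG animation. As a hedged illustration only, the following minimal Python sketch shows what such rule-based camera planning over generated dialogue could look like; the camera names, wide-shot probability and shot lengths are invented for this example and are not the rules or values derived in those papers.

```python
import random

# Hypothetical camera-switching rules for a generated talk show.
# All names and parameters below are illustrative, not extracted
# from the broadcast analysis described in the papers above.

SPEAKER_CAM = {"host": "cam_host", "guest_a": "cam_guest_a", "guest_b": "cam_guest_b"}
WIDE_CAM = "cam_wide"

def plan_camera_cuts(dialogue, wide_shot_prob=0.2, min_shot_sec=2.0):
    """dialogue: list of (speaker, text, duration_sec) tuples.
    Returns a list of (start_sec, camera) cut decisions."""
    cuts, t, last_cam = [], 0.0, None
    for speaker, _text, duration in dialogue:
        cam = SPEAKER_CAM.get(speaker, WIDE_CAM)   # default: cut to the speaker
        if random.random() < wide_shot_prob and duration >= 2 * min_shot_sec:
            # Occasionally re-establish the studio with a wide shot,
            # then cut to the speaker after the minimum shot length.
            cuts.append((t, WIDE_CAM))
            cuts.append((t + min_shot_sec, cam))
        elif cam != last_cam:
            cuts.append((t, cam))
        last_cam = cam
        t += duration
    return cuts

if __name__ == "__main__":
    demo = [("host", "Welcome to the show.", 3.0),
            ("guest_a", "Thanks for having me.", 2.5),
            ("guest_b", "Glad to be here.", 4.5)]
    for start, cam in plan_camera_cuts(demo):
        print(f"{start:5.1f}s -> {cam}")
```

Feeding the resulting cut list to whatever renders the scene (a TVML script, a Unity camera controller, etc.) is left out; the point is only that switching decisions can be expressed as a small set of explicit rules.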
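Entry 5 describes displaying scraped paintings at life size on the virtual museum's walls, with generated captions, based on metadata obtained by Web scraping sites such as Wikimedia Commons. The sketch below illustrates only the life-size placement and captioning step under an assumed, simplified metadata schema (the hypothetical ArtworkMeta class); it is not the authors' actual data format or pipeline.

```python
from dataclasses import dataclass
from typing import Tuple

# Assumed, simplified metadata for one scraped artwork. The real system
# extracts such metadata by Web scraping; here the values are supplied
# directly so the placement step can be shown in isolation.

@dataclass
class ArtworkMeta:
    title: str
    artist: str
    width_cm: float   # physical width of the painting
    height_cm: float  # physical height of the painting

def quad_size_meters(meta: ArtworkMeta) -> Tuple[float, float]:
    """Size of the textured quad in scene units (1 unit = 1 m),
    so the painting appears at life size on the gallery wall."""
    return meta.width_cm / 100.0, meta.height_cm / 100.0

def caption(meta: ArtworkMeta) -> str:
    """Generated wall caption shown next to the picture frame."""
    return f"{meta.artist}: {meta.title} ({meta.width_cm:.0f} x {meta.height_cm:.0f} cm)"

if __name__ == "__main__":
    mona = ArtworkMeta("Mona Lisa", "Leonardo da Vinci", 53.0, 77.0)
    print(quad_size_meters(mona))   # (0.53, 0.77)
    print(caption(mona))
```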
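Entries 9, 15 and 16 describe a server-side step that fetches a Web page's HTML and converts it into a script describing a CG-animation story, which a client-side player then renders with CG characters and synthesized voice. The sketch below is a minimal stand-in for that conversion step, not the actual framework: it extracts the title and paragraphs from raw HTML and emits a flat JSON list of "say" actions. The JSON schema and the "Anchor" character are assumptions for illustration; the real systems produce TVML-based scripts.

```python
import json
from html.parser import HTMLParser

# Minimal stand-in for the server-side HTML-to-script conversion step.

class TextExtractor(HTMLParser):
    """Collects the page title and paragraph texts from raw HTML."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.paragraphs = []
        self._tag = None

    def handle_starttag(self, tag, attrs):
        if tag in ("title", "p"):
            self._tag = tag

    def handle_endtag(self, tag):
        if tag == self._tag:
            self._tag = None

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self._tag == "title":
            self.title += text
        elif self._tag == "p":
            self.paragraphs.append(text)

def html_to_scene_script(html: str, presenter: str = "Anchor") -> str:
    """Convert raw HTML into a simple scene script (JSON string)."""
    parser = TextExtractor()
    parser.feed(html)
    actions = [{"action": "say", "character": presenter,
                "text": f"Today's topic: {parser.title}"}]
    actions += [{"action": "say", "character": presenter, "text": p}
                for p in parser.paragraphs]
    return json.dumps(actions, indent=2, ensure_ascii=False)

if __name__ == "__main__":
    demo_html = ("<html><head><title>Demo entry</title></head>"
                 "<body><p>First paragraph.</p><p>Second paragraph.</p></body></html>")
    print(html_to_scene_script(demo_html))
```

A client-side player would fetch such a script from the server and map each "say" action onto a CG character with a synthesized voice, which is the division of labour the open-framework abstract describes.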
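Entries 13 and 21 describe writing speech texts and character movements along with the music beat to produce a rap music video with synthesized voice (AquesTalk) and real-time CG (TVML). The sketch below only illustrates the beat-to-time alignment idea under an assumed (start_beat, text, motion) line format; it is a hypothetical simplification of a rap script and makes no AquesTalk or TVML calls.

```python
# Beat-to-time alignment: lines written against the music beat get
# concrete start times from the track's tempo (BPM).

def beats_to_seconds(beat_index: float, bpm: float) -> float:
    """Time offset of a given beat from the start of the track."""
    return beat_index * 60.0 / bpm

def schedule(lines, bpm=90.0):
    """lines: list of (start_beat, text, motion). Returns timed cues."""
    return [{"time_sec": round(beats_to_seconds(b, bpm), 2), "say": text, "motion": motion}
            for b, text, motion in lines]

if __name__ == "__main__":
    demo = [(0, "Check the mic, one two", "nod"),
            (4, "Script goes in, video comes out", "point"),
            (8, "Rendered live in the game engine", "wave")]
    for cue in schedule(demo, bpm=95):
        print(cue)
```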
