Uppsala University Publications (uu.se)
1 - 25 of 25
  • 1.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    A New Virtual Museum Equipped with Automatic Video Content Generator (2014). Conference paper (Refereed)
    Abstract [en]

    Virtual museum services have been deployed in many places in recent years, owing to advances in video and network technology. In a virtual museum, people primarily experience the prepared content actively, using a mouse, a touch panel, or specially designed tangible devices. In a real museum space, by contrast, people appreciate the artifacts passively, walking around the space freely and without stress. The virtual museum can thus be said to demand rather active engagement compared to the real museum. We have been studying and developing a new type of virtual museum that lets people participate in the space in both an active and a passive way, by implementing various new functions. Here, we present a new virtual museum equipped with a video content generator that uses a virtual exhibition space modeled in 3D computer graphics (CG). The video content is created in real time from the 3DCG-modeled museum space itself, with appropriate visual and audio effects added, such as camerawork, superimposed text, synthesized voice narration, and background music. Since the system works in the 3DCG space, a user can easily switch back and forth, with a wheel mouse, between watching the video content passively and walking through the space actively. In this paper, we first survey major virtual museums around the world. We then describe our method: 1) a specially designed walkthrough algorithm, 2) a video content generator using the 3DCG museum space, and 3) the seamless integration of 1) and 2). Finally, we describe our functioning prototype, followed by conclusions and future plans.

  • 2.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Automatic CG Talk Show Generation from the Internet Forum (2016). In: Proceedings of SIGRAD 2016. Conference paper (Refereed)
    Abstract [en]

    We have developed an application that automatically produces computer graphics (CG) animations in a TV talk-show format from an Internet forum. First, an actual broadcast talk show is analyzed to obtain data on camera changes, lighting, studio setup, etc. The result of the analysis is then implemented in the application, and a CG animation is created using the TV program Making Language (TVML). The application runs in the Unity game engine, with CG characters speaking in computer-generated voices. We have successfully created a CG-generated TV talk show that allows users to "watch" a TV-format show generated from the text posted on the forum.

  • 3.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Automatic Generation of CG Animation from the Internet Forum "2 Channeru" (2018). In: Journal of the Institute of Image Information and Television Engineers, ISSN 1342-6907, Vol. 72, no 10, p. 189-196. Article in journal (Refereed)
    Abstract [en, translated from Japanese]

    We are researching and developing the automatic conversion of various kinds of information, Web sites among them, into TV-program-like CG animation. As one such attempt, we have developed an application that automatically generates computer graphics (CG) animation from the "2channel" bulletin board. The basic method is to analyze actual TV program footage, extract the production know-how used in it, encode that know-how as rules and numerical parameters, and implement them in software, thereby obtaining CG animation that imitates a TV program. In this work, we analyzed the camera switching in one hour of an actually broadcast debate program and turned it into an algorithm. This paper explains the process in detail and describes the application built with this method. We also report an evaluation experiment on the generated CG animation, which clarified the effectiveness of the method and the remaining issues.

  • 4.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Automatic Generation of Personal Virtual Museum (2016). In: Proceedings of CyberWorlds 2016 / [ed] Sourin, E, p. 219-222. Conference paper (Refereed)
    Abstract [en]

    We have developed a virtual museum, rendered in real-time 3DCG, capable of exhibiting arbitrary planar artifacts, such as paintings, specified by a user. The pictures are collected from Internet sites such as Wikimedia Commons via bookmarks supplied by the user. The artifact images are displayed at life size, automatically aligned on the museum wall with picture frames and generated captions. This process is driven by metadata extracted with a technique called Web scraping, which pulls the necessary information from the target Web sites. The museum space is realistically modeled, with high resolution and sophisticated illumination, and the user can walk through it. The system enables users to create their own personalized museums, with their favorite pictures exhibited in a realistic way.
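    The abstract above hinges on Web scraping to pull picture metadata out of gallery pages. As a minimal, self-contained illustration of that step (the HTML snippet, attribute names, and parser class are hypothetical, not the scraper the paper uses), Python's standard html.parser can collect image sources and caption text like this:

    ```python
    from html.parser import HTMLParser

    class ArtworkMetaParser(HTMLParser):
        """Collects <img> sources plus a caption-like attribute from a
        gallery page. The 'alt'-as-caption convention is an illustrative
        assumption, not the metadata scheme used in the paper."""
        def __init__(self):
            super().__init__()
            self.artworks = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                a = dict(attrs)
                if "src" in a:
                    self.artworks.append({"src": a["src"],
                                          "title": a.get("alt", "")})

    # Hypothetical gallery fragment standing in for a fetched page.
    page = '<div><img src="mona_lisa.jpg" alt="Mona Lisa"></div>'
    parser = ArtworkMetaParser()
    parser.feed(page)
    # parser.artworks -> [{'src': 'mona_lisa.jpg', 'title': 'Mona Lisa'}]
    ```

    A real scraper would additionally fetch the page over HTTP and read size metadata to reproduce the life-size display the abstract describes.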

  • 5.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Building Virtual Museum Exhibition System as a Medium (2019). In: 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE 2019), IEEE. Conference paper (Refereed)
    Abstract [en]

    We have constructed a circulation system based on a proposed format for exhibition data in a virtual museum. The virtual museum is built with real-time computer graphics; a user can walk through it and view the displayed artworks. The circulation system for artworks and museum spaces operates over the Internet, in a manner similar to that of the e-book. We have successfully established a virtual exhibition system that fulfills the requirements of a medium, and we describe the working system we have developed.

  • 6.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Implementation of the Text-Generated TV (2015). Conference paper (Refereed)
    Abstract [en]

    This paper describes the implementation of the Text-Generated TV that we previously proposed. It circulates a text script over the network, and a user can watch TV-program videos with a specially designed player that converts the scripts into computer graphics animations. We have developed a prototype player in the Unity game engine for viewers and deployed a Text-Generated TV broadcast station on a server where actual content is ready to view. The software is downloadable, and a user can actually watch TV with the player on a PC.

  • 7.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Microtone Analysis of Blues Vocal: Can Hatsune-Miku Sing the Blues? (2014). Conference paper (Refereed)
  • 8.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Open Framework Facilitating Automatic Generation of CG Animation from Web Site (2015). Conference paper (Refereed)
    Abstract [en]

    We have been studying and developing a system that generates computer graphics animation (CGA) automatically by processing the HTML data of a Web site. In this paper, we propose an open framework to facilitate this. The framework runs entirely on the server side, obtaining the HTML, converting it to a script describing the CGA story, and updating the script. On the client side, a user accesses the script on the server and visualizes it with real-time CG characters, synthesized voices, camera work, superimposed text, sound-file playback, etc. We have constructed the framework on the server and deployed the engines that convert Web sites to CGAs. This paper describes the framework in detail and shows example projects providing an automatically generated news show, a talk show, and personal blog visualization.

  • 9.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Text Generated TV: A New Television System Delivering Visual Content Created Directly by Text (2014). In: Proceedings of IWAIT 2014. Conference paper (Refereed)
    Abstract [en]

    We propose a new television system based on a methodology that circulates a text-based script representing visual content on the Internet instead of transmitting complete video data. This Text Generated TV is realized by a technology called T2V (Text-To-Vision), which creates TV-program-like CG animation (CGA) automatically from a script. The new TV system integrates the results of our research on T2V technology. It provides user-generated content, CGA generated automatically from text sources available on the Internet, and interactive, video-game-like applications in a TV context. Text Generated TV can be regarded as a form of object-based content representation; it therefore offers great flexibility, and we believe it has large potential for the future of television. In this paper, we introduce our concept and a prototype, and discuss new aspects of our approach.
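    The core idea above, circulating a script rather than finished video, can be sketched with a toy parser. The line format below is invented for illustration and is not actual TVML or T2V syntax; it only shows how dialogue text might be turned into playable "talk" events for CG characters:

    ```python
    import re

    # Toy script format (NOT real TVML/T2V syntax): each line is
    #   Speaker: dialogue text
    # and becomes one "talk" event a CG player could render.
    def parse_script(script: str):
        events = []
        for line in script.strip().splitlines():
            m = re.match(r"\s*(\w+)\s*:\s*(.+)", line)
            if m:
                events.append({"type": "talk",
                               "character": m.group(1),
                               "text": m.group(2)})
        return events

    events = parse_script("""
    Anna: Welcome to the show.
    Bob: Thanks for having me.
    """)
    # events[0] -> {'type': 'talk', 'character': 'Anna', 'text': 'Welcome to the show.'}
    ```

    A player would consume such events to drive character animation and voice synthesis, which is why the script is far smaller to transmit than rendered video.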

  • 10.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Iguchi, Akihiko
    Astrodesign, Inc..
    Virtual Museum Equipped with Automatic Video Content Generator (2016). In: ITE Transactions on Media Technology and Applications (MTA), E-ISSN 2186-7364, Vol. 4, no 1, p. 41-48. Article in journal (Refereed)
    Abstract [en]

    We have been developing a new type of virtual museum that enables users to participate in the space in both active and passive modes of operation. In the "active mode", the virtual museum provides a user walkthrough of the realistic 3DCG-modeled museum space and the artifacts within it. In the "passive mode", the system adds desired visual and audio effects, such as camerawork, superimposed text, synthesized voice narration, post-production processes, and background music, to give users a TV-commentary style of CG animation. Users can easily transition back and forth between walking through the space actively and watching the video content passively. This paper describes the system design and implementation in detail, followed by a discussion of the functioning prototype.

  • 11.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nigorikawa, Takesato
    ProgMind, Inc..
    System Development Kit of T2V in Unity: T2V Engine Capable of Converting Script to CG Animation in the Unity Game Engine (2014). Conference paper (Other academic)
    Abstract [en]

    T2V (Text-To-Vision) is a technology that generates TV-program-like CG animation from a given script. We have developed a system development kit (SDK) that makes it possible for developers to create various interactive applications in Unity using T2V technology. We first explain the SDK and its usage. We then introduce two applications made with the SDK: 1) automatic generation of a talk show from an Internet bulletin board, and 2) an interactive quiz application with a multi-story structure.

  • 12.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Shishikui, Yoshiaki
    Meiji University, Department of Frontier Media Science, Tokyo, Japan.
    Rap Music Video Generator: Write a Script to Make Your Rap Music Video with Synthesized Voice and CG Animation (2017). Conference paper (Refereed)
    Abstract [en]

    We have made an application that produces a rap music video with CG animation from a simple user-written script. AquesTalk and TVML (TV program Making Language) are used for the synthesized voice and the real-time CG generation, respectively. A user can easily enjoy making a rap music video by writing speech texts and character movements, aligned with the music beat, in the script.

  • 13.
    Hayashi, Masaki
    et al.
    Gotland University, School of Game Design, Technology and Learning Processes.
    Nakajima, Masayuki
    Gotland University, School of Game Design, Technology and Learning Processes.
    Bachelder, Steven
    Gotland University, School of Game Design, Technology and Learning Processes.
    International Standard of Automatic and Intelligent News Broadcasting System (2012). In: Proceedings of NICOGRAPH International 2012, p. 1234-1237. Conference paper (Refereed)
    Abstract [en]

    We propose an automatic and intelligent news broadcasting system that generates full-CG animated news shows from original text formats found on the Internet. The news broadcasts are delivered to users on multiple platforms in the language of their choice, and users are also provided with interactive and intelligent news services. This paper introduces the overall system and provides a feasibility test and working model of the news-show application; the example shown is generated from an HTML Internet news site. We also describe a method for constructing a practical system. The future plan is to conduct large-scale experiments and field tests, and thereafter to feed the research outcomes into the standardization of the next-generation TV broadcast system.

  • 14.
    Hayashi, Masaki
    et al.
    Gotland University, School of Game Design, Technology and Learning Processes.
    Nakajima, Masayuki
    Gotland University, School of Game Design, Technology and Learning Processes.
    Bachelder, Steven
    Gotland University, School of Game Design, Technology and Learning Processes.
    TV News Show Broadcast Generated Automatically from Data on the Internet (2012). In: Proceedings of 2012 ITE Annual Conference, p. 1-2. Conference paper (Refereed)
    Abstract [en]

    We propose an automatic news broadcasting system that generates full-CG animated news shows from original text formats found on the Internet. This paper introduces the overall system and provides a feasibility test and working model of the news-show application, generated from an HTML Internet news site. We also describe future collaborative work.

  • 15.
    Hayashi, Masaki
    et al.
    Gotland University, School of Game Design, Technology and Learning Processes.
    Nakajima, Masayuki
    Gotland University, School of Game Design, Technology and Learning Processes.
    Bachelder, Steven
    Gotland University, School of Game Design, Technology and Learning Processes.
    Iguchi, Akihiko
    Astrodesign, Inc. Tokyo, Japan.
    Machida, Satoshi
    Astrodesign, Inc. Tokyo, Japan.
    Ultra High Resolution 4K/8K Real-time CG System and Its Application (2013). In: Proceedings of IWAIT 2013, p. 4-. Conference paper (Refereed)
    Abstract [en]

    We propose an 'Ultra-CG' project promoting an extremely high-definition real-time computer graphics system with 4K and 8K (Super Hi-Vision) resolution, beyond conventional HDTV. For this project it is important to study not only hardware and software requirements but also a content-creation methodology for extremely high-resolution displays. We first developed a functioning test system that exhibits a 'Virtual Museum' in 4K resolution, to assess the validity of our approach and clarify the tasks for further application development. The system consists of a PC with high-speed graphics cards and a 4K monitor. The real-time 3DCG software and all CG models are built on Unity, a 3DCG game engine used worldwide. We are now considering various feasible applications built on the system, such as exhibition, entertainment, medical use, and more. This paper describes the test system and discusses future applications and the collaboration of content creators and system engineers.

  • 16.
    Hayashi, Masaki
    et al.
    Gotland University, School of Game Design, Technology and Learning Processes.
    Nakajima, Masayuki
    Gotland University, School of Game Design, Technology and Learning Processes.
    Bachelder, Steven
    Gotland University, School of Game Design, Technology and Learning Processes.
    Iguchi, Akihiko
    Machida, Satoshi
    Virtual Museum on 4K Ultra-high Resolution Real-time CG System (2013). In: Proceedings of Virtual Reality Technical Seminar. Conference paper (Other academic)
    Abstract [en]

    We have researched and developed a 'Virtual Museum' on an extremely high-definition real-time computer graphics system with 4K and 8K (Super Hi-Vision) resolution. We first developed a functioning test system that exhibits Japanese artifacts, ukiyo-e prints and panels, in 4K resolution. In our system, the artifacts are digitized at ultra-high resolution and then positioned in a high-quality modeled exhibition space. A user can walk through the exhibition space, viewing the artifacts from a distance and also moving closer to observe their detailed surfaces seamlessly. With this method, we have successfully enhanced both the sense of being there and the sense of realness. In this paper, we first survey several virtual museums in practical use, then explain our system in detail and present experimental results, with a discussion comparing it to existing virtual museums.

  • 17.
    Hayashi, Masaki
    et al.
    Gotland University, School of Game Design, Technology and Learning Processes.
    Nakajima, Masayuki
    Gotland University, School of Game Design, Technology and Learning Processes.
    Bachelder, Steven
    Gotland University, School of Game Design, Technology and Learning Processes.
    Nigorikawa, Takesato
    Implementation of T2V: Text-To-Vision on the Game Engine UNITY (2013). Conference paper (Other academic)
    Abstract [en]

    We have been developing T2V (Text-To-Vision) technology, which produces CG animation from a given script. We developed an application called 'T2V Player' and have distributed it as freeware for years. The application runs on a Windows PC and produces TV-program-like animation from user-input text, using real-time CG, voice-synthesis techniques, etc. In this paper, we introduce a prototype of 'T2V on UNITY', developed from scratch on the UNITY game engine. Thanks to UNITY, we were able to enhance its functionality: multi-platform support, availability of the CG character data circulated in the UNITY community, the ability to apply the T2V method to game development, and more.

  • 18.
    Hayashi, Masaki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Shishikui, Yoshiaki
    Meiji University.
    Bachelder, Steven
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Rap Script: Write Text to Create a Rap with Synthesized Voice and CG Animation [ラップスクリプト: テキストを書いて音声合成とCGアニメのラップが作れる] (2017). Conference paper (Other (popular science, discussion, etc.))
    Abstract [en]

    We have made an application that produces rap music with CG animation from a simple user-written script. AquesTalk and TVML are used for the synthesized voice and the real-time CG generation, respectively. A user can easily enjoy making a rap music video by writing speech texts and character movements, aligned with the music beat, in the script.

  • 19.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Current Topics in Computer Graphics: Report of SIGGRAPH 2014 (2014). In: ITE Technical Report, ISSN 1342-6893, Vol. 38, no 39, p. 25-32. Article in journal (Refereed)
  • 20.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Current Topics of SIGGRAPH 2014 (2014). In: ITE Journal, ISSN 0162-8178, Vol. 68, no 11, p. 868-873. Article in journal (Other academic)
  • 21.
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design. Tokyo Inst Technol, Imaging Sci & Engn Lab, Tokyo, Japan; Tokyo Inst Technol, Dept Comp Sci, Tokyo, Japan; Tokyo Inst Technol, Grad Sch Informat Sci & Engn, Tokyo, Japan; Tokyo Inst Technol, Tokyo, Japan; Kanagawa Inst Technol, Kanagawa, Japan; IEICE, Informat & Syst Soc, Oxford, England.
    FOREWORD (2016). In: IEICE Transactions on Information and Systems, ISSN 0916-8532, E-ISSN 1745-1361, Vol. E99D, no 4, p. 1023. Article in journal (Other academic)
  • 22.
    Nakajima, Masayuki
    Gotland University, School of Game Design, Technology and Learning Processes.
    Intelligent CG Making Technology and Intelligent Media (2013). In: ITE Transactions on Media Technology and Applications, ISSN 2186-7364, Vol. 1, no 1, p. 20-26. Article in journal (Refereed)
    Abstract [en]

    In this invited research paper, I describe the Intelligent CG Making Technology (ICGMT) production methodology and Intelligent Media (IM). I begin with an explanation of the key aspects of the ICGMT and a definition of IM. Thereafter I explain the three approaches of the ICGMT: reusing animation data, making animation from text, and making animation from natural spoken language. Finally, I explain the current ICGMT approaches under development by the Nakajima laboratory.

  • 23.
    Nakajima, Masayuki
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Shirai, Akihiko
    Kanagawa Institute of Technology.
    SIGGRAPH2013 Report (2013). In: ITE Journal, ISSN 1342-6893, Vol. 67, no 11, p. 955-961. Article in journal (Other (popular science, discussion, etc.))
    Abstract [en, translated from Japanese]

    SIGGRAPH 2013, the 40th conference in the series, was held as festively as ever from Sunday, July 21 to Thursday, July 25 at the Anaheim Convention Center in California, USA. SIGGRAPH is a special interest group of the ACM and an international conference on computer graphics (CG) and interactive techniques, where the latest work in CG, virtual reality, art, image processing, games, and related areas is presented; it is an extremely important conference for this field. Over its five days, many events run in parallel, making it impossible to attend everything, but as in previous years we report an overview of this year's conference.

  • 24.
    Xie, Ning
    et al.
    Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Ctr Future Media, Chengdu, Sichuan, Peoples R China..
    Yuan, Tian Ye
    Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Ctr Future Media, Chengdu, Sichuan, Peoples R China..
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design.
    Shen, Hengtao
    Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Ctr Future Media, Chengdu, Sichuan, Peoples R China..
    LipSync Generation based on Discrete Cosine Transform (2017). In: 2017 NICOGRAPH International (NicoInt), IEEE, p. 76-79. Conference paper (Other academic)
    Abstract [en]

    Nowadays, voice acting plays an increasingly advanced role in video games, especially role-playing games, anime-based games, and serious games. To enhance communication, synchronizing lip and mouth movements naturally is an important part of a convincing 3D character performance [1]. In this paper, we propose a lightweight LipSync generation algorithm. Following heuristic knowledge of mouth movement in games, extracting frequency-domain values from the voice signal is essential for LipSync. We therefore cast the problem as a Discrete Cosine Transform (DCT), extracting frequency-domain values with an absolute-value computation, thereby avoiding the redundant computation of phases and moduli required by a Fourier Transform (FT) voice model. Our experimental results demonstrate that the DCT-based method achieves good performance for game production.
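    As a rough sketch of the DCT-plus-absolute-value idea described in the abstract above (the band count, normalization, and mapping to a mouth parameter are illustrative guesses, not the paper's actual algorithm), one could map the low-frequency DCT magnitude of an audio frame to a mouth-openness value:

    ```python
    import numpy as np

    def dct2(frame: np.ndarray) -> np.ndarray:
        """Naive DCT-II of a 1-D signal (no external DSP library needed)."""
        n = len(frame)
        k = np.arange(n)
        # Basis matrix: basis[k, t] = cos(pi/n * k * (t + 0.5))
        basis = np.cos(np.pi / n * np.outer(k, k + 0.5))
        return basis @ frame

    def mouth_openness(frame: np.ndarray, bands: int = 8) -> float:
        """Map absolute low-frequency DCT energy of one audio frame to a
        0..1 mouth-openness value. The band count and normalization here
        are illustrative assumptions, not taken from the paper."""
        coeffs = np.abs(dct2(frame))[1:bands + 1]  # skip the DC term
        energy = coeffs.mean() / (len(frame) ** 0.5)
        return float(min(1.0, energy))
    ```

    A silent frame yields zero openness, and scaling the input down can only lower the value, which matches the intuition that louder speech opens the mouth wider.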

  • 25.
    Zhang, Xiaohua
    et al.
    Hiroshima Inst Technol, Hiroshima, Japan..
    Xie, Ning
    Tongji Univ, Shanghai, Peoples R China..
    Nakajima, Masayuki
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Game Design. Kanagawa Insitute Technol, Atsuki, Japan..
    Cleaning Textual and Non-textual Mixed Color Document Image with Uneven Shading (2016). In: Proceedings NICOGRAPH International 2016, p. 136. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a simple approach for extracting text lines and segmenting image regions in a color document image with mixed textual and non-textual regions and uneven shading, ultimately producing a clean document image. Our experimental results demonstrate that the proposed approach performs plausibly.
