Uppsala University Publications
Nakajima, Masayuki
Publications (10 of 24)
Hayashi, M., Bachelder, S. & Nakajima, M. (2018). Automatic Generation of CG Animation from the Internet Forum "2 Channeru". Journal of the Institute of Image Information and Television Engineers, 72(10), 189-196
Automatic Generation of CG Animation from the Internet Forum "2 Channeru"
2018 (Japanese). In: Journal of the Institute of Image Information and Television Engineers, ISSN 1342-6907, Vol. 72, no 10, p. 189-196. Article in journal (Refereed). Published
Abstract [ja]

We are pursuing research and development on automatically converting various kinds of information, starting with Web sites, into TV-program-like CG animation. As one such attempt, we developed an application that automatically generates computer graphics (CG) animation from the "2 Channeru" bulletin board. The basic method is to analyze footage of actual TV programs, extract the production know-how used in them, turn it into rules and numerical parameters, and implement these in software to obtain CG animation that imitates a TV program. In this work, we analyzed the camera switching in one hour of an actually broadcast debate program and turned it into an algorithm. This paper explains this process in detail and describes the application built with this method. We also report an evaluation experiment on the resulting CG animation, which clarified the effectiveness of the method and issues for future work.

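The rule-and-parameter approach described in the abstract — measuring how a real program switches cameras and encoding the result as weighted rules — can be sketched as follows. The event types, shot names, and weights here are illustrative assumptions, not the values extracted in the paper.

```python
import random

# Hypothetical shot-selection rules of the kind the paper derives from a real
# debate program. The events and weights below are invented for illustration.
SHOT_RULES = {
    "speaker_change": [("close_up_speaker", 0.7), ("two_shot", 0.2), ("wide_shot", 0.1)],
    "long_speech":    [("close_up_speaker", 0.4), ("listener_reaction", 0.3), ("wide_shot", 0.3)],
}

def pick_shot(event, rng):
    """Choose a camera shot for a dialogue event using the weighted rules."""
    shots, weights = zip(*SHOT_RULES[event])
    return rng.choices(shots, weights=weights, k=1)[0]

rng = random.Random(0)
timeline = ["speaker_change", "long_speech", "speaker_change"]
shots = [pick_shot(event, rng) for event in timeline]
```

Encoding the measured behaviour as data (rather than hard-coded branches) is what lets the same engine imitate different program formats.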
Place, publisher, year, edition, pages
Tokyo, Japan, 2018
Keywords
Automatic content generation, CG animation, Internet forum, Media conversion
National Category
Computer Sciences Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:uu:diva-368979 (URN), 10.3169/itej.72.J189 (DOI)
Available from: 2018-12-10. Created: 2018-12-10. Last updated: 2018-12-11. Bibliographically approved.
Xie, N., Yuan, T. Y., Nakajima, M. & Shen, H. (2017). LipSync Generation based on Discrete Cosine Transform. In: 2017 Nicograph International (Nicoint). Paper presented at the 16th NICOGRAPH International Conference (NICOInt), Kyoto Univ, Kyoto, Japan, June 02-03, 2017 (pp. 76-79). IEEE
LipSync Generation based on Discrete Cosine Transform
2017 (English). In: 2017 Nicograph International (Nicoint), IEEE, 2017, p. 76-79. Conference paper, Published paper (Other academic)
Abstract [en]

Nowadays, voice acting plays an increasingly important role in video games, especially in role-playing games, anime-based games, and serious games. To enhance communication, synchronizing lip and mouth movements naturally is an important part of a convincing 3D character performance [1]. In this paper, we propose a lightweight LipSync generation algorithm. According to heuristic knowledge about mouth movement in games, extracting frequency-domain values from the voice is essential for in-game LipSync. We therefore formulate the problem in terms of the Discrete Cosine Transform (DCT), which extracts frequency-domain values with a simple absolute-value computation and thus avoids the redundant phase and modulus computations of a Fourier Transform (FT) voice model. Our experimental results demonstrate that the DCT-based method achieves good performance for game production.

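To illustrate the idea — take the DCT of a short audio frame, use absolute values of the coefficients instead of FT phase/modulus computations, and drive mouth openness from the low-frequency energy — here is a minimal sketch. The frame length, frequency band, and normalisation are illustrative assumptions, not the paper's parameters.

```python
import math

def dct_ii(frame):
    """Naive DCT-II of a short audio frame (O(n^2), fine for a sketch)."""
    n = len(frame)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(frame))
            for k in range(n)]

def mouth_openness(frame):
    """Map low-band DCT magnitude to a 0..1 mouth-open value."""
    coeffs = dct_ii(frame)
    energy = sum(abs(c) for c in coeffs[1:8])     # skip DC, keep a low band
    return min(1.0, energy / (len(frame) * 4.0))  # ad-hoc normalisation

# A loud frame should open the mouth wider than a quiet one.
loud = [math.sin(2 * math.pi * 220 * t / 8000) for t in range(64)]
quiet = [0.05 * s for s in loud]
```

In a real game one would use a fast DCT (e.g. `scipy.fft.dct`) per audio frame; the point of the sketch is only that absolute values of real coefficients suffice, with no complex arithmetic.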
Place, publisher, year, edition, pages
IEEE, 2017
Keywords
Computer Graphics Applications, Animation, DCT
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-350706 (URN), 10.1109/NICOInt.2017.45 (DOI), 000425229800015, 978-1-5090-5332-2 (ISBN)
Conference
16th NICOGRAPH International Conference (NICOInt), Kyoto Univ, Kyoto, Japan, June 02-03, 2017
Available from: 2018-05-17. Created: 2018-05-17. Last updated: 2018-05-17. Bibliographically approved.
Hayashi, M., Bachelder, S., Nakajima, M. & Shishikui, Y. (2017). Rap Music Video Generator: Write a Script to Make Your Rap Music Video with Synthesized Voice and CG Animation. Paper presented at the 2017 IEEE 6th Global Conference on Consumer Electronics (GCCE), 24-27 Oct. 2017.
Rap Music Video Generator: Write a Script to Make Your Rap Music Video with Synthesized Voice and CG Animation
2017 (English). Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

We have made an application that produces rap music videos with CG animation from a simple script. Aquestalk and TVML (TV program Making Language) are used for voice synthesis and real-time CG generation, respectively. A user can easily make a rap music video by writing speech texts and character movements along with the music beat in the script.

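As a sketch of the workflow, the following parses a tiny beat-annotated script into timed speech and motion events that could be handed to AquesTalk-style voice synthesis and a TVML-style renderer. The script syntax and character names are invented for illustration; the actual tool's script format differs.

```python
# Invented mini-script: "beat character: line" for rap lines,
# "beat character! motion" for character movements.
SCRIPT = """\
1 MC_A: yo welcome to the show
3 MC_A! raise_hand
5 MC_B: we rap in CG you know
"""

def parse(script):
    """Turn the script into (beat, character, kind, payload) events."""
    events = []
    for line in script.splitlines():
        beat, rest = line.split(" ", 1)
        if ":" in rest:
            who, text = rest.split(":", 1)
            events.append((int(beat), who.strip(), "say", text.strip()))
        else:
            who, motion = rest.split("!", 1)
            events.append((int(beat), who.strip(), "move", motion.strip()))
    return sorted(events)

events = parse(SCRIPT)
```

Anchoring every event to a beat number is what keeps synthesized speech and character motion on the music grid.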
Keywords
CG animation, Voice synthesis, Music, TVML, Aquestalk
National Category
Computer Sciences
Research subject
Computing Science
Identifiers
urn:nbn:se:uu:diva-336338 (URN), 10.1109/GCCE.2017.8229189 (DOI), 000426994600004, 978-1-5090-4046-9 (ISBN), 978-1-5090-4045-2 (ISBN)
Conference
2017 IEEE 6th Global Conference on Consumer Electronics (GCCE), 24-27 Oct. 2017
Available from: 2017-12-13. Created: 2017-12-13. Last updated: 2018-06-29. Bibliographically approved.
Hayashi, M., Shishikui, Y., Bachelder, S. & Nakajima, M. (2017). ラップスクリプト: テキストを書いて音声合成とCGアニメのラップが作れる. Paper presented at the Art & Science Forum.
ラップスクリプト: テキストを書いて音声合成とCGアニメのラップが作れる
2017 (Japanese). Conference paper, Poster (with or without abstract) (Other (popular science, discussion, etc.))
Abstract [en]

We have made an application that produces rap music with CG animation from a simple script. Aquestalk and TVML are used for voice synthesis and real-time CG generation, respectively. A user can easily make a rap music video by writing speech texts and character movements along with the music beat in the script.

Keywords
CG animation, Voice synthesis, Music, TVML, Aquestalk
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-336334 (URN)
Conference
Art & Science Forum
Note

English title: Rap making script: you can make your own rap with synthesized voice and CG animation by writing a script

Available from: 2017-12-13. Created: 2017-12-13. Last updated: 2018-01-13. Bibliographically approved.
Hayashi, M., Bachelder, S. & Nakajima, M. (2016). Automatic CG Talk Show Generation from the Internet Forum. In: Proceedings of SIGRAD2016. Paper presented at SIGRAD2016.
Automatic CG Talk Show Generation from the Internet Forum
2016 (English). In: Proceedings of SIGRAD2016, 2016. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

We have developed an application to produce Computer Graphics (CG) animations in TV talk show formats automatically from an Internet forum. First, an actual broadcast talk show is analyzed to obtain data regarding camera changes, lighting, studio set-up, etc. The result of the analysis is then implemented in the application, and a CG animation is created using the TV program Making Language (TVML). The application works in the Unity game engine, with CG characters speaking with computer-generated voices. We have successfully created a CG-generated TV talk show which allows users to "watch" the TV show format generated from the text information coming from the forum on the Internet.

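One piece of such a pipeline — turning an anonymous thread into talk-show dialogue before TVML renders it — can be sketched by assigning posts to a rotating cast. The cast names and the simple round-robin policy are illustrative assumptions, not the paper's method.

```python
# Hypothetical cast of CG presenters; the actual application's casting
# logic and character set are not described in the abstract.
CAST = ["Host", "GuestA", "GuestB"]

def to_dialogue(posts):
    """Assign each forum post to a presenter, round-robin."""
    return [(CAST[i % len(CAST)], text) for i, text in enumerate(posts)]

thread = ["First post!", "I disagree.", "Here is a source."]
dialogue = to_dialogue(thread)
```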
Keywords
CG, TVML, animation, automatic media conversion
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:uu:diva-314242 (URN)
Conference
SIGRAD2016
Available from: 2017-01-31. Created: 2017-01-31. Last updated: 2017-01-31. Bibliographically approved.
Hayashi, M., Bachelder, S. & Nakajima, M. (2016). Automatic Generation of Personal Virtual Museum. In: Sourin, E (Ed.), Proceedings of CyberWorlds2016. Paper presented at CyberWorlds2016, Chongqing University of Technology, Chongqing, China, September 28-30, 2016 (pp. 219-222).
Automatic Generation of Personal Virtual Museum
2016 (English). In: Proceedings of CyberWorlds2016 / [ed] Sourin, E, 2016, p. 219-222. Conference paper, Published paper (Refereed)
Abstract [en]

We have developed a Virtual Museum with real-time 3DCG capable of exhibiting arbitrary planar artifacts, such as paintings, specified by a user. The pictures are collected from Internet sites such as Wikimedia Commons via bookmarks given by the user. The artifact images are displayed at life size, automatically aligned on the wall of the museum with picture frames and generated captions. This process is based on metadata extracted using a technique called Web scraping, which pulls the necessary information from the target Web sites. The museum space is realistically modeled with high resolution and sophisticated illumination, and the user can walk through the space. The system enables users to create their own personalized museums with their favorite pictures exhibited in a realistic way.

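The life-size display step can be illustrated as follows: given physical dimensions scraped from a page's metadata, compute the world-space size of the quad that shows the image. The metadata field names are assumptions made for this sketch; the Mona Lisa's catalogued size (77 cm × 53 cm) is used as sample data.

```python
# Hypothetical metadata record, as a Web scraper might assemble it
# from an artwork's page.
def quad_size_m(meta):
    """World-space (width, height) in metres for a life-size display quad."""
    return (meta["width_cm"] / 100.0, meta["height_cm"] / 100.0)

mona_lisa = {"title": "Mona Lisa", "width_cm": 53, "height_cm": 77}
size = quad_size_m(mona_lisa)
```

Driving the quad size from scraped physical dimensions, rather than from image resolution, is what makes "life size" independent of how large the downloaded picture happens to be.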
Keywords
CG, Virtual museum, Digital heritage, Virtual reality, Automatic content generation
National Category
Computer Systems
Research subject
Computing Science
Identifiers
urn:nbn:se:uu:diva-314241 (URN), 10.1109/CW.2016.44 (DOI), 000390769400036, 9781509023035 (ISBN)
Conference
CyberWorlds2016, Chongqing University of Technology, Chongqing, China, September 28-30, 2016
Available from: 2017-01-31. Created: 2017-01-31. Last updated: 2017-02-08. Bibliographically approved.
Zhang, X., Xie, N. & Nakajima, M. (2016). Cleaning Textual and Non-textual Mixed Color Document Image with Uneven Shading. In: Proceedings NICOGRAPH International 2016. Paper presented at the 15th Nicograph International Conference (NicoInt), July 06-08, 2016, Hangzhou Dianzi University, Hangzhou, China (pp. 136-136).
Cleaning Textual and Non-textual Mixed Color Document Image with Uneven Shading
2016 (English). In: Proceedings NICOGRAPH International 2016, 2016, p. 136-136. Conference paper, Published paper (Refereed)
Abstract [en]

This paper proposes a simple approach for extracting text lines and segmenting image regions from a color document image with uneven shading that mixes textual and non-textual regions; finally, a clean document image is obtained. Our experimental results demonstrate that the proposed approach performs plausibly.

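A minimal sketch of the shading-robust idea — compare each pixel to a local mean, so that a smooth illumination gradient is not mistaken for content — is shown below. The window size and offset are illustrative assumptions; the paper's actual pipeline (text-line extraction plus non-textual region segmentation) is more involved.

```python
# Local-mean thresholding: dark pixels are kept only when they are clearly
# darker than their neighbourhood, so uneven shading survives as background.
def binarize(img, win=3, offset=10):
    """Binarize a grayscale image (list of rows) against local means."""
    h, w = len(img), len(img[0])
    out = [[255] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - win), min(h, y + win + 1))
            xs = range(max(0, x - win), min(w, x + win + 1))
            vals = [img[j][i] for j in ys for i in xs]
            if img[y][x] < sum(vals) / len(vals) - offset:
                out[y][x] = 0
    return out

# Background brightens left to right (uneven shading); one dark "text" pixel.
img = [[100 + 10 * x for x in range(8)] for y in range(5)]
img[2][3] = 20
clean = binarize(img)
```

A global threshold would misclassify one side of this gradient; the local comparison keeps only the genuinely dark mark.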
Keywords
document image, uneven shading, text lines, non-textual, binarization, connected component
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:uu:diva-313628 (URN), 10.1109/NicoInt.2016.29 (DOI), 000389249400029, 9781509023059 (ISBN)
Conference
15th Nicograph International Conference (NicoInt), July 06-08, 2016, Hangzhou Dianzi University, Hangzhou, China
Available from: 2017-01-23. Created: 2017-01-23. Last updated: 2018-01-13. Bibliographically approved.
Nakajima, M. (2016). FOREWORD. IEICE transactions on information and systems, E99D(4), 1023-1023
FOREWORD
2016 (English). In: IEICE transactions on information and systems, ISSN 0916-8532, E-ISSN 1745-1361, Vol. E99D, no 4, p. 1023-1023. Article in journal, Editorial material (Other academic). Published
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-299388 (URN), 000375973800027
Available from: 2016-07-18. Created: 2016-07-18. Last updated: 2018-01-10. Bibliographically approved.
Hayashi, M., Bachelder, S., Nakajima, M. & Iguchi, A. (2016). Virtual Museum Equipped with Automatic Video Content Generator. ITE Transactions on Media Technology and Applications (MTA), 4(1), 41-48
Virtual Museum Equipped with Automatic Video Content Generator
2016 (English). In: ITE Transactions on Media Technology and Applications (MTA), E-ISSN 2186-7364, Vol. 4, no 1, p. 41-48. Article in journal (Refereed). Published
Abstract [en]

We have been developing a new type of Virtual Museum which enables users to participate in the space with both active and passive modes of operation. In the "active mode", the virtual museum provides a user walkthrough of the realistic 3DCG-modeled museum space and the artifacts in it. In the "passive mode", the system adds desired visual and audio effects such as camerawork, superimposed text, synthesized voice narration, post-production processes, background music, and so on, giving users a TV-commentary style of CG animation. Users can easily transition back and forth between the two modes, walking through the space actively or watching the video content passively. This paper describes the details of the system design and the implementation, followed by a discussion of the functioning prototype.

Place, publisher, year, edition, pages
Tokyo, Japan, 2016
Keywords
virtual museum, video content, walkthrough, real-time CG, TVML, 4K
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:uu:diva-268598 (URN), 10.3169/mta.4.41 (DOI)
Available from: 2015-12-08. Created: 2015-12-08. Last updated: 2017-02-15. Bibliographically approved.
Hayashi, M., Bachelder, S. & Nakajima, M. (2015). Implementation of the Text-Generated TV. Paper presented at the 14th annual international conference “NICOGRAPH International 2015”, June 13-14, 2015, Tokyo, Japan.
Implementation of the Text-Generated TV
2015 (English). Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

This paper describes the implementation of the Text-Generated TV that we had previously proposed. It uses text scripts that circulate on the network, and a user can watch TV program videos with a specially designed player converting the scripts to computer graphics animations. We have developed the player prototype in the Unity game engine for viewers and deployed the Text-Generated TV broadcast station on a server where actual contents are ready to view. The software is downloadable, and a user can actually watch TV with the player on a PC.

National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:uu:diva-268604 (URN)
Conference
14th annual international conference “NICOGRAPH International 2015”, June 13-14, 2015, Tokyo, Japan
Available from: 2015-12-08. Created: 2015-12-08. Last updated: 2017-01-26. Bibliographically approved.