Virtual museum services have been deployed in many places in recent years, owing to advances in video and network technology. In a virtual museum, people primarily experience the prepared content actively, using a mouse, a touch panel, or specially designed tangible devices. In a real museum space, on the other hand, people appreciate the artifacts passively, walking around the space freely and without stress. The virtual museum can thus be said to urge people to engage with it more actively than the real museum does. We have been studying and developing a new type of virtual museum that enables people to participate in the space in both active and passive ways by implementing various new functions. In this work, we developed a new virtual museum equipped with a video content generator that uses a virtual exhibition space modeled with 3D computer graphics (CG). The video content is created in real time from the 3DCG-modeled museum space as it is, with appropriate visual and audio effects added, such as camerawork, superimposed text, synthesized voice narration, and background music. Since the system works in the 3DCG space, a user can easily go back and forth, with a wheel mouse, between the two modes of watching the video content passively and walking through the space actively. In this paper, we first introduce major virtual museums around the world. We then describe our method: 1) a specially designed walkthrough algorithm, 2) the video content generator using the 3DCG museum space, and 3) the seamless integration of 1) and 2). We then describe our functioning prototype, followed by the conclusion and future plans.
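The mode transition described above can be sketched as a small state machine. The class and method names below are illustrative assumptions, not the system's actual API: the point is only that a single wheel input toggles between the active walkthrough and the passive video content within the same 3DCG space.

```python
# Minimal sketch (hypothetical API) of the two-mode switching: wheel up
# enters the passive video mode, wheel down returns to active walkthrough.

class MuseumViewer:
    MODES = ("walkthrough", "video")

    def __init__(self):
        self.mode = "walkthrough"  # start in the active mode

    def on_wheel(self, delta):
        """Toggle between the two viewing modes on a wheel event."""
        if delta > 0 and self.mode == "walkthrough":
            self.mode = "video"        # start camerawork, narration, BGM
        elif delta < 0 and self.mode == "video":
            self.mode = "walkthrough"  # hand camera control back to the user
        return self.mode
```

Because both modes share the same 3DCG scene state, the switch needs no loading step, which is what makes the transition seamless.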
We have developed an application that automatically produces computer graphics (CG) animations in a TV talk show format from an Internet forum. First, an actual broadcast talk show is analyzed to obtain data regarding camera changes, lighting, studio setup, etc. The results of the analysis are then implemented in the application, and a CG animation is created using the TV program Making Language (TVML). The application runs in the Unity game engine, with CG characters speaking in computer-generated voices. We have successfully created a CG-generated TV talk show that allows users to "watch", in TV show format, the text information coming from the forum on the Internet.
We have been researching and developing the automatic conversion of various kinds of information, starting with Web sites, into TV-program-like CG animation. As one such attempt, we developed an application that automatically generates computer graphics (CG) animation from the "2channel" bulletin board. The basic method is to analyze actual TV program footage, extract the production know-how used in it, codify it as rules and numerical parameters, and implement them in software, thereby obtaining CG animation that imitates a TV program. In this work, we analyzed the camera switching in one hour of an actually broadcast debate program and turned it into an algorithm. This paper explains this process in detail and describes the actual application created with this method. We also report an evaluation experiment on the resulting CG animation, which clarified the effectiveness of the method and the remaining issues.
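A rule of the kind extracted from the broadcast footage can be sketched as follows. This is an illustration under stated assumptions, not the paper's actual algorithm or parameters: the active speaker's close-up is held up to a maximum shot duration, after which a cut is made, choosing the speaker's close-up with some probability and a wide shot otherwise.

```python
import random

# Illustrative rule-based camera switching (assumed shot names and
# probabilities, not the values measured from the broadcast program).

def choose_camera(speaker, current_shot, shot_seconds,
                  max_shot=8.0, closeup_prob=0.7, rng=random.random):
    """Return the next shot given the active speaker and the current shot."""
    if shot_seconds < max_shot and current_shot == f"closeup_{speaker}":
        return current_shot              # hold the shot until the time limit
    if rng() < closeup_prob:
        return f"closeup_{speaker}"      # cut to the speaker's close-up
    return "wide"                        # otherwise cut to the wide shot
```

In the actual system, the hold duration and cut probabilities would be the numerical parameters obtained from analyzing the one-hour broadcast.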
We have developed a virtual museum with real-time 3DCG capable of exhibiting arbitrary planar artifacts, such as paintings, specified by a user. The pictures are collected from Internet sites such as Wikimedia Commons via bookmarks given by the user. The artifact images are displayed at life size, automatically aligned on the museum wall with picture frames and generated captions. This process is based on metadata extracted with a technique called Web scraping, which extracts the necessary information from the target Web sites. The museum space is realistically modeled with high resolution and sophisticated illumination, and the user can walk through the space. The system enables users to create their own personalized museums, with their favorite pictures exhibited in a realistic way.
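The step from scraped metadata to life-size wall placement can be sketched as below. The function name, metadata fields, and the eye-height convention are assumptions for illustration; the key idea is that the physical dimensions recovered by scraping directly determine the on-wall display size and the generated caption.

```python
# Hypothetical sketch: scraped metadata (title, artist, physical size in cm)
# drives life-size placement of a painting on the gallery wall.

def layout_artifact(meta, wall_x_cm, eye_height_cm=150.0):
    """Return frame position/size so the painting appears at life size."""
    w, h = meta["width_cm"], meta["height_cm"]
    caption = f'{meta["title"]} ({meta["artist"]}), {w} x {h} cm'
    return {"x": wall_x_cm,
            "y": eye_height_cm - h / 2.0,   # hang the centre at eye height
            "width": w, "height": h,        # life size: 1 cm == 1 world cm
            "caption": caption}
```

Because the layout is computed from metadata rather than hand-placed, any picture the user bookmarks can be hung automatically.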
We have constructed a circulation system based on a proposed format for exhibition data in the virtual museum. The virtual museum is built with real-time computer graphics, so that a user can walk through it and see the displayed artworks. The circulation system for artworks and museum spaces is built on the Internet, similar to that of the e-book. We have successfully established a virtual exhibition system fulfilling the requirements to be a medium, and we describe the working system that we have developed.
This paper describes the implementation of the Text-Generated TV that we had previously proposed. It circulates text scripts on the network, and a user can watch TV program videos with a specially designed player that converts the scripts into computer graphics animations. We have developed a player prototype for viewers in the Unity game engine and deployed a Text-Generated TV broadcast station on a server where actual content is ready to view. The software is downloadable, and a user can actually watch TV with the player on a PC.
We have been studying and developing a system that generates computer graphics animation (CGA) automatically by processing the HTML data of a Web site. In this paper, we propose an open framework to facilitate this. The framework runs entirely on the server side, obtaining the HTML, converting it into a script describing the CGA story, and updating the script. On the client side, a user accesses the script on the server to visualize it using real-time CG characters with synthesized voice, camerawork, superimposed text, sound file playback, etc. We have constructed the framework on the server and deployed the engines that convert Web sites to CGAs. This paper describes the framework in detail and shows example projects providing an automatically generated news show, a talk show, and personal blog visualization.
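The server-side conversion stage can be sketched as follows, under stated assumptions: an article already extracted from the HTML (title plus paragraphs) is turned into a simple TVML-style script in which a CG presenter reads the text with the title superimposed. The command grammar here is illustrative, not the exact T2V script syntax.

```python
# Hypothetical sketch of the HTML-to-script conversion step: an extracted
# article becomes a line-oriented script for the client-side CG player.

def article_to_script(title, paragraphs, presenter="A"):
    lines = [f'super(text="{title}")']          # superimpose the headline
    for p in paragraphs:
        # one spoken line per paragraph, read by the CG presenter
        lines.append(f'talk(character="{presenter}", text="{p}")')
    lines.append("supoff()")                    # remove the superimposed text
    return "\n".join(lines)
```

Keeping the intermediate representation as plain text is what lets the same script be updated on the server and re-rendered by any client.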
We propose a new television system based on a methodology that circulates a text-based script representing visual content on the Internet, instead of transmitting complete video data. This Text Generated TV is realized by a technology called T2V (Text-To-Vision), which creates TV-program-like CG animation (CGA) automatically from a script. Our new TV system integrates the results of our research on T2V technology. It provides user-generated content, CGA generated automatically from text sources available on the Internet, and interactive, video-game-like applications in a TV context. Text Generated TV can be regarded as a form of object-based content representation; it therefore offers many possibilities and great flexibility, and we believe it has great potential for the future TV system. In this paper, we introduce our concept, a prototype development, and a discussion of the new aspects of our approach.
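The receiving side of such a system can be sketched as a parser that turns the circulated script back into render commands. The grammar assumed below is the same illustrative TVML-style one (command name plus quoted keyword arguments), not the production format; the point is that a text script, unlike baked video, is decoded into objects a real-time CG engine can act on.

```python
import re

# Minimal sketch of a Text Generated TV player front end: each script line
# is parsed into a (command, args) tuple for the CG renderer to execute.

def parse_script(script):
    events = []
    for line in script.strip().splitlines():
        m = re.match(r'(\w+)\((.*)\)$', line.strip())
        if not m:
            continue                     # skip lines that are not commands
        cmd, argstr = m.group(1), m.group(2)
        args = dict(re.findall(r'(\w+)="([^"]*)"', argstr))
        events.append((cmd, args))
    return events
```

Because the content arrives as structured commands rather than pixels, the player is free to change characters, language, or camera style at render time, which is the object-based flexibility the abstract refers to.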
We have been developing a new type of virtual museum that enables users to participate in the space in both active and passive modes of operation. In the "active mode", the virtual museum provides a user walkthrough of the realistic 3DCG-modeled museum space and the artifacts in it. In the "passive mode", the system adds desired visual and audio effects, such as camerawork, superimposed text, synthesized voice narration, post-production processes, and background music, to give users a TV-commentary-style CG animation. Users can easily transition back and forth between the two modes of walking through the space actively and watching the video content passively. This paper describes the details of the system design and implementation, followed by a discussion of the functioning prototype.
T2V (Text-To-Vision) is a technology that enables a computer to generate TV-program-like CG animation from a given script. In this work, we have developed a software development kit (SDK) that makes it possible for developers to create various interactive applications in Unity utilizing the T2V technology. We first explain the SDK and its usage. We then introduce two applications made with the SDK: 1) automatic generation of a talk show from a bulletin board on the Internet, and 2) an interactive quiz application with a multi-story structure.
We have made an application for making rap music videos with CG animation by writing a simple script. AquesTalk and TVML (TV program Making Language) are used for synthesized voice and real-time CG generation, respectively. A user can easily enjoy making a rap music video by writing speech text and character movements, aligned with the music beat, in the script.
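The beat alignment can be sketched as below. The script format (a beat number per lyric line) and the function name are assumptions for illustration, not the actual TVML/AquesTalk syntax; the idea is simply that the tempo maps beat indices to timestamps so that synthesized speech and character motion land on the beat.

```python
# Illustrative sketch of beat-to-time scheduling for a rap video script.

def schedule_lyrics(lines, bpm=90):
    """lines: list of (beat_number, text); returns (seconds, text) pairs."""
    sec_per_beat = 60.0 / bpm            # e.g. 120 BPM -> 0.5 s per beat
    return [(beat * sec_per_beat, text) for beat, text in lines]
```

With the schedule computed, each lyric line's speech synthesis and character movement can be triggered at its timestamp.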
We propose an automatic and intelligent news broadcasting system that generates full-CG animated news shows from original text formats from the Internet. The news broadcasts are delivered to users on multiple platforms in the language of their choice. Users are also provided with interactive and intelligent news services. This paper introduces the overall system and provides a feasibility test and a working model of the news show application. The example shown is generated from an HTML Internet news site. We also describe a method for constructing a practical system. The future plan of action is to conduct large-scale experiments and field tests and thereafter implement the research outcomes in the standardization of the next-generation TV broadcast system.
We propose an automatic news broadcasting system that generates full-CG animated news shows from original text formats from the Internet. This paper introduces the overall system and provides a feasibility test and a working model of the news show application generated from an HTML Internet news site. We also describe future collaborative work.
We propose the 'Ultra-CG project', which promotes extreme high-definition real-time computer graphics with resolutions of 4K and 8K (Super Hi-Vision), exceeding conventional HDTV. It is important for the project to study not only hardware and software requirements but also a content creation methodology for content displayed on extremely high-resolution displays. We first developed a functioning test system that exhibits a 'Virtual Museum' in 4K resolution, to assess the validity of our approach and clarify the tasks toward further application development. The system consists of a PC with high-speed graphics cards and a 4K monitor. The real-time 3DCG software and all the CG models are built on Unity, a 3DCG game engine used worldwide. We are now considering various feasible applications built on the system, such as exhibition, entertainment, medical use, and more. This paper describes the test system and discusses future applications and the collaboration of content creators and system engineers.
We have researched and developed a 'Virtual Museum' as an extreme high-definition real-time computer graphics system with resolutions of 4K and 8K (Super Hi-Vision). We first developed a functioning test system that exhibits Japanese artifacts, Ukiyo-e prints and panels, in 4K resolution. In our system, the artifacts are digitized at ultra high resolution and then positioned in a high-quality modeled exhibition space. A user can walk through the exhibition space, viewing the artifacts from a distance and also getting closer to observe their detailed surfaces seamlessly. With this method, we have successfully enhanced both the sense of being there and the sense of realness. In this paper, we first survey several virtual museums in practical use, then explain our system in detail and present experimental results with a discussion in comparison with the existing virtual museums.
We have been developing T2V (Text-To-Vision) technology, which produces CG animation from a given script. We developed an application called 'T2V Player' and have distributed it as freeware for years. The application runs on a Windows PC and produces TV-program-like animation from user-input text using real-time CG, voice synthesis, and other techniques. In this paper, we introduce the prototype of 'T2V on UNITY', which has been developed from scratch on the Unity game engine. Owing to Unity, we succeeded in enhancing its functions: multi-platform support, availability of CG character data circulated in the Unity community, the capability of applying the T2V method to game development, and more.
We have made an application for making rap music with CG animation by writing a simple script. AquesTalk and TVML are used for synthesized voice and real-time CG generation, respectively. A user can easily enjoy making a rap music video by writing speech text and character movements, aligned with the music beat, in the script.
In this invited research paper, I describe the Intelligent CG Making Technology (ICGMT) production methodology and Intelligent Media (IM). I begin with an explanation of the key aspects of the ICGMT and a definition of IM. Thereafter I explain the three approaches of the ICGMT: the reuse of animation data, making animation from text, and making animation from natural spoken language. Finally, I explain the current approaches to the ICGMT under development in the Nakajima laboratory.
SIGGRAPH 2013, the 40th edition of the conference, was held as splendidly as ever from Sunday, July 21 to Thursday, July 25 at the Anaheim Convention Center in California, USA. SIGGRAPH is a special interest group of the ACM and an international conference on computer graphics (CG) and interactive techniques, where the latest work on CG, virtual reality, art, image processing, games, and more is presented. It is an extremely important conference for this society. Over the five days, numerous events were held in parallel, making it impossible to attend everything, but as in previous years we report an overview of this year's conference.
Nowadays, voice acting plays an increasingly important role in video games, especially role-playing games, anime-based games, and serious games. To enhance communication, synchronizing lip and mouth movements naturally is an important part of a convincing 3D character performance [1]. In this paper, we propose a lightweight lip-sync generation algorithm. According to heuristic knowledge of mouth movement in games, extracting frequency-domain values from the voice is essential for lip sync. We therefore formulate the problem with the Discrete Cosine Transform (DCT), which extracts frequency-domain values of the voice using absolute-value computations, avoiding the redundant computation of phases and moduli required by a Fourier Transform (FT) voice model. Our experimental results demonstrate that our DCT-based method achieves good performance for game making.
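The core of the DCT-based approach can be sketched as below. The band limits and normalization are illustrative assumptions, not the paper's tuned parameters: a DCT-II over each short audio frame yields real coefficients (no phase to compute), and the absolute magnitude of a low-frequency band drives the mouth-open parameter of the CG character.

```python
import math

# Sketch of DCT-driven lip sync: per-frame low-frequency DCT magnitude
# is mapped to a mouth-openness value in [0, 1].

def dct2(frame):
    """Unnormalized DCT-II of a list of samples."""
    N = len(frame)
    return [sum(frame[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]

def mouth_openness(frame, band=(1, 8)):
    """Map the low-frequency DCT magnitude of one audio frame to [0, 1]."""
    coeffs = dct2(frame)
    energy = sum(abs(c) for c in coeffs[band[0]:band[1]])
    return min(1.0, energy / len(frame))   # clamp to the animation range
```

Because the DCT produces real coefficients, only absolute values are summed, which is the saving over an FT pipeline that must also compute phases and moduli.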
This paper proposes a simple approach for extracting text lines and segmenting image regions from a color document image with uneven shading that mixes textual and non-textual regions, finally obtaining a clean document image. Our experimental results demonstrate that the proposed approach performs plausibly.
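One simple realization of shading correction (an illustration of the general technique, not necessarily the paper's exact method) is to estimate the paper background as a local brightness maximum around each pixel and divide it out, so the page becomes uniformly white and the dark text survives for line extraction.

```python
# Sketch of uneven-shading correction on a grayscale image held as a 2D list:
# divide each pixel by its local background estimate (a neighborhood maximum,
# valid for dark text on light paper).

def correct_shading(img, win=2):
    """img: 2D list of grayscale values in [0, 255]; returns normalized image."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - win), min(h, y + win + 1))
            xs = range(max(0, x - win), min(w, x + win + 1))
            bg = max(img[yy][xx] for yy in ys for xx in xs)  # local background
            out[y][x] = min(255, round(255 * img[y][x] / max(bg, 1)))
    return out
```

After correction, a single global threshold suffices to separate text from the now-uniform background, simplifying the subsequent text-line extraction.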