Museums provide not only audio guides but also video guides, but producing video guides is labour-intensive. We have developed a museum guide creation system that facilitates video guide production for museums. The video guide creator uploads a script to the server, and the visitor accesses the script with a smartphone application and views it as a CG animation. We use an existing content management system, WordPress, for the script input: museum guide creators simply access the WordPress blog site and post an article, which in this case is the scenario of a museum guide. As a next step, we plan to open our museum guide creation system to the public, allowing many people to create and register movie guide scripts voluntarily, much like Wikipedia.
We have been studying and developing a real-time computer graphics (CG) based virtual museum in which a user can walk through and appreciate artworks digitized in high resolution. Our virtual museum also has a function to automatically create TV-program-like CG animations using the 3D CG models in the virtual space as they are, so that the user can learn about individual works by watching the art shows. The CG animation is produced with the TVML (TV program Making Language) engine implemented in the virtual museum. However, the current problem is that it requires a lot of work for a developer to write the complicated TVML scripts manually. Therefore, we have developed a special tool to help the developer prepare the TVML scripts easily. With this tool, the developer can produce a TVML-based art program simply by writing out a simple scenario in an ordinary text editor. In order to design this tool, actually broadcast TV art programs were analyzed to determine the syntax of the simple scenario. Based on the analysis, we have developed the tool with the TVML engine working on the Unity game engine. We have also used this tool to imitate a broadcast TV program to validate its usability.
During the years 1795–1866 the Swedish national art collection, today’s Nationalmuseum, was on display at the Royal Palace in Stockholm at what was known as Kongl. Museum. This museum consisted of two sculpture galleries adjacent to the palace garden Logården, and a paintings gallery and a few more rooms on the second floor. While the sculpture galleries are well known, and have been reconstructed in situ, there has been much less research on the display of paintings at the museum. In the cross-disciplinary research project »Virtual Museum at the Royal Palace» we are using a digital 3D model to reconstruct the display of paintings in the so-called smaller gallery of the palace as it appeared during the period. The reconstruction deals with two different hangings of the gallery, from 1795 and c. 1843, made by the curators Carl Fredric Fredenheim and Lars Jacob von Röök respectively. Our preliminary findings show that, contrary to earlier claims, the two hangings are rather different and rest on quite different ideologies of museum display, something that is possible to see thanks to the method of using digital 3D models as a basis for analysis.
A system is presented for the algorithmic placement of, for example, artworks in a gallery, based on an arbitrary number of predefined categories by which each artwork can be numerically appraised. By letting a visitor set a weight value for each category, a personalized experience is created that affects the placement of the artworks. To verify the system, a 3D simulation environment was built to enable the visitor to experience the personalized art gallery. The algorithm for artwork placement was based on a simplified version of a nearest-insertion algorithm, providing an approximate solution to the traveling salesman problem, with the objective either to minimize the differences between the exhibited artworks along the path or to invoke a predefined amount of variation between them.
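The placement approach described above can be illustrated with a minimal sketch; the category scores, visitor weights, and distance function below are invented for illustration and are not the paper's actual implementation. A weighted difference between artworks' category scores drives a simplified nearest-insertion tour:

```python
# Hypothetical sketch of weighted nearest-insertion artwork placement.
# Category scores and visitor weights are invented for illustration.

def distance(a, b, weights):
    """Weighted difference between two artworks' category scores."""
    return sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)) ** 0.5

def nearest_insertion(artworks, weights):
    """Order artworks so adjacent pieces differ as little as possible.

    A simplified nearest-insertion heuristic for the underlying
    traveling-salesman problem: repeatedly pick the unplaced artwork
    closest to the tour and insert it where it adds the least length.
    """
    remaining = list(range(len(artworks)))
    tour = [remaining.pop(0)]           # start the tour anywhere
    while remaining:
        # nearest unplaced artwork to any artwork already on the tour
        k = min(remaining, key=lambda r: min(
            distance(artworks[r], artworks[t], weights) for t in tour))
        remaining.remove(k)
        # insert k at the position that increases tour length the least
        best_pos, best_cost = 0, float("inf")
        for i in range(len(tour) + 1):
            prev, nxt = tour[i - 1], tour[i % len(tour)]
            cost = (distance(artworks[k], artworks[prev], weights) +
                    distance(artworks[k], artworks[nxt], weights) -
                    distance(artworks[prev], artworks[nxt], weights))
            if cost < best_cost:
                best_pos, best_cost = i, cost
        tour.insert(best_pos, k)
    return tour

# Each artwork scored on three hypothetical categories, e.g.
# (colourfulness, abstraction, age); the visitor weights them.
artworks = [(0.9, 0.1, 0.3), (0.2, 0.8, 0.5), (0.85, 0.2, 0.4), (0.1, 0.9, 0.6)]
order = nearest_insertion(artworks, weights=(1.0, 2.0, 0.5))
```

Changing the visitor's weight vector changes which artworks count as "similar" and therefore reorders the gallery path.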
I have been conducting research on methods of generating content by computer. Here, two recent attempts are described in detail: the automatic generation of rap music videos from a script, and the generation of slapstick effects by the destruction of neurons in a neural network. The former works in the direction of completion, the latter in the direction of destruction. This paper describes both concepts, creation and destruction, together with the construction of complex systems, which are equally important in art, and tries to give a guideline for art generation by computer.
We have been building a keyword-based virtual museum that allows users to search a database of artworks with search keywords, pick up the artworks, and automatically display them in a CG museum space of variable dimensions. We have now added an annotation system to this museum, allowing users to freely annotate the artworks while enjoying the exhibition. In this way the keyword-search database is naturally updated, and the descriptions of the artworks viewed in the virtual exhibition are elaborated over time. While visitors enjoy the exhibition, the museum itself becomes richer and evolves on its own, which we call the Growing Museum.
AI technology, especially the neural network, has earned significant attention as a powerful tool capable of solving a wide range of problems. However, I am intrigued by the idea of utilizing neural networks in a less practical direction. Specifically, I aim to simulate the aimless walk of a fly on a window pane using a neural network. Instead of relying on random values, I have implemented a technique called "neurodrug", in which I deliberately destroy neurons in a neural network to generate unpredictable behaviors. Through this experiment, I delve into philosophical questions related to purpose, decision-making, randomness, motivation, intelligence, and instinct.
An introduction to the studies of "Text-To-Vision", "Virtual Museums", and "Neurodrug".
We propose a method to generate slapstick effects by deliberately cutting out, tampering with, and re-connecting neurons in a neural network that has been trained in a proper way. As experiments to try out the idea, we have developed two applications: a drumbeat generator and a CG character animation. The results look interesting and show a certain potential for entertainment as well as a viable way of making interactive art.
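The neuron-destruction idea can be sketched minimally; the network, its weights, and the damage rate below are all invented for illustration and do not reproduce the authors' implementation. "Destroying" a hidden neuron here simply means zeroing its incoming and outgoing weights in a small feed-forward network:

```python
import random

def forward(x, w_in, w_out):
    """Tiny two-layer network: input -> hidden (ReLU) -> output."""
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w_in]
    return [sum(wo * h for wo, h in zip(row, hidden)) for row in w_out]

def destroy_neurons(w_in, w_out, rate, rng):
    """'Neurodrug' sketch: zero the weights of a random fraction of hidden
    neurons, so the damaged network produces distorted outputs."""
    n_hidden = len(w_in)
    victims = rng.sample(range(n_hidden), int(rate * n_hidden))
    for j in victims:
        w_in[j] = [0.0] * len(w_in[j])   # cut incoming connections
        for row in w_out:
            row[j] = 0.0                 # cut outgoing connections
    return victims

rng = random.Random(0)
w_in = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(8)]   # 4 -> 8
w_out = [[rng.uniform(-1, 1) for _ in range(8)] for _ in range(2)]  # 8 -> 2
x = [0.5, -0.2, 0.9, 0.1]

clean = forward(x, w_in, w_out)
destroyed = destroy_neurons(w_in, w_out, rate=0.5, rng=rng)
damaged = forward(x, w_in, w_out)
```

In the actual applications the damaged outputs would drive a drumbeat pattern or a character's joint angles; here the weights stand in for a properly trained network.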
The recreation of past art exhibitions held at the Royal Palace Museum in Stockholm in 1795 and 1843 was made possible by virtual museum technology using real-time computer graphics, in a collaboration between art historians and a computer graphics development team. The goal was to define a data format for exchange between these different fields, so that the art historians themselves could be directly involved in the CG reproductions. A text file describing the meta-information on the artworks and the exhibition location was used as an intermediary, allowing researchers in the humanities to construct their own CG museum exhibits. In this paper, we introduce our attempt to use this method and discuss it from various perspectives.
We have been studying virtual museums and have developed several new systems with real-time 3D CG. They are capable of providing different types of user experience by giving users: 1) a 4K/8K ultra-high-resolution display to show the virtual exhibition, 2) realistic museum architecture models with artifacts digitized in high resolution, both planar paintings and 3D objects, 3) TV-documentary-style CG animations created in the virtual 3D CG space with spoken commentators, 4) a user-customizable museum where user-specified images are automatically displayed in the space, and more. Through these various attempts, we have been seeking a virtual museum that does not spoil the very basic user experience: the atmosphere, the mood, and the ambience of being in a museum space. I will talk about these research and development results with many working demos on a PC.
The range covered by the genre of games is vast. Game development draws on knowledge from many fields: computer science, visualization, user interfaces, art, social science, psychology, education, and more. At the same time, there may be no field newer than games. Its history is long, but it is only recently that it has begun to look like it may form an academic discipline of its own. I work in research and education on games at the Department of Game Design at Uppsala University in Sweden. Here, I would like to introduce some of the game-related research being conducted there, including work I am directly involved in: the game application of TVML, which automatically generates CG animation from text scripts, through its port to the Unity game engine; an attempt to make lean-forward and lean-back experiences coexist in a virtual museum built with the same engine; and work applying brain measurement to game development. Through these, I would also like to discuss the relationship between science and art.
Virtual museum services have been deployed in many places owing to the advanced video and network technology of recent years. In a virtual museum, people primarily experience the prepared content actively, with a mouse, a touch panel, or specially designed tangible devices. In a real museum space, on the other hand, people appreciate the artifacts passively, walking around the space freely and without stress. It can be said that the virtual museum is designed to urge people to engage with it rather actively compared to the real museum. We have been studying and developing a new type of virtual museum enabling people to participate in the space in both active and passive ways, by implementing various new functions. This time, we developed a new virtual museum equipped with a video content generator that uses the virtual exhibition space modeled with 3D computer graphics (CG). This video content is created in real time by using the 3D CG-modeled museum space as it is, adding appropriate visual and audio effects such as camerawork, superimposed text, synthesized voice narration, background music, etc. Since the system works in the 3D CG space, a user can easily go back and forth between the two modes of watching the video content passively and walking through the space actively with a wheel mouse. In this paper, we first introduce the principal virtual museums in the world. Then we describe our method: 1) a specially designed walkthrough algorithm, 2) the video content generator using the 3D CG museum space, and 3) the seamless integration of 1) and 2). We then describe our functioning prototype, followed by the conclusion and future plans.
We have developed an application to produce computer graphics (CG) animations in TV talk show format automatically from an Internet forum. First, an actual broadcast talk show is analyzed to obtain data regarding camera changes, lighting, studio set-up, etc. The result of the analysis is then implemented in the application, and a CG animation is created using the TV program Making Language (TVML). The application works in the Unity game engine with CG characters speaking with computer-generated voices. We have successfully created a CG-generated TV talk show which allows users to "watch" the TV show format generated from the text information coming from the forum on the Internet.
We have developed an application with which a user enters a query to search for favorite artworks; it then downloads the artwork data from the internet and automatically displays it in a virtual museum with real-time 3D computer graphics, where the user can freely walk through and enjoy. This time, we built the application using the service of the Metropolitan Museum of Art, which releases more than 400,000 pieces of artwork data with metadata for free. By doing this, we have made a concrete step toward our final goal of producing flexible and personalized virtual museums in a procedural manner.
We are conducting research and development on automatically converting various kinds of information, such as Web sites, into TV-program-like CG animations. As one such attempt, we have developed an application that automatically generates computer graphics (CG) animation from the "2channel" bulletin board. The basic method is to analyze actual TV program footage, extract the production know-how used in it, formulate that know-how as rules and numerical parameters, and implement it in software to obtain CG animation that imitates a TV program. In this work, we analyzed the camera switching of one hour of actually broadcast debate-program footage and turned it into an algorithm. This paper explains this process in detail and describes the actual application created with this method. We also report an evaluation experiment on the resulting CG animation, which clarified the effectiveness of the method and the remaining issues.
We have developed a virtual museum with real-time 3D CG capable of exhibiting arbitrary planar artifacts, such as paintings, specified by a user. The pictures are collected from Internet sites such as Wikimedia Commons via bookmarks given by the user. The artifact images are displayed at life size, automatically aligned on the wall of the museum with picture frames and generated captions. This process is based on metadata extracted using a technique called Web scraping, which pulls the necessary information from the target Web sites. The museum space is realistically modeled with high resolution and sophisticated illumination, and the user can walk through the space. The system enables users to create their own personalized museums with their favorite pictures exhibited in a realistic way.
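The life-size alignment step above can be sketched as follows; the metadata field names and the wall-layout parameters are hypothetical, not the paper's actual data format. From scraped metadata giving a painting's physical size, the layout computes each picture's world-space dimensions and a running position along the wall:

```python
# Hypothetical sketch: lay out scraped paintings at life size along a wall.
# The field names ("width_cm", "height_cm"), gap, and eye level are assumptions.

def layout_on_wall(paintings, gap_m=0.8, eye_level_m=1.5):
    """Return (x_center, y_center, width, height) in metres for each painting,
    placed left to right with a fixed gap, centred on eye level."""
    placements, x = [], 0.0
    for p in paintings:
        w = p["width_cm"] / 100.0    # physical size from scraped metadata
        h = p["height_cm"] / 100.0
        x += gap_m + w / 2.0         # advance to this painting's centre
        placements.append((x, eye_level_m, w, h))
        x += w / 2.0                 # move past its right edge
    return placements

paintings = [
    {"title": "Example A", "width_cm": 77, "height_cm": 53},    # invented sizes
    {"title": "Example B", "width_cm": 120, "height_cm": 90},
]
placements = layout_on_wall(paintings)
```

In the real system these quads would be textured with the downloaded images and framed in the 3D museum model; the sketch only shows how life-size scaling falls out of the metadata.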
We have constructed a circulation system based on a proposed format of exhibition data for the virtual museum. The virtual museum is built with real-time computer graphics so that a user can walk through it and see the displayed artworks. The circulation system for artworks and museum spaces is built on the internet and is similar to that of the e-book. We have successfully established a virtual exhibition system fulfilling the requirements to be a medium, and we describe the working system that we have developed.
This paper describes the implementation of the Text-Generated TV that we had previously proposed. It circulates text scripts on the network, and a user can watch TV program videos with a specially designed player that converts the scripts to computer graphics animations. We have developed the player prototype in the Unity game engine for viewers and deployed a Text-Generated TV broadcast station on the server, where actual contents are ready to view. The software is downloadable, and a user can actually watch TV with the player on a PC.
We have been studying and developing a system which generates computer graphics animation (CGA) automatically by processing the HTML data of a Web site. In this paper, we propose an open framework to facilitate this. The framework functions entirely on the server side, obtaining the HTML, converting it to a script describing the CGA story, and updating the script. On the client side, a user accesses the script on the server to visualize it using real-time CG characters with synthesized voice, camera work, superimposed text, sound file playback, etc. We have constructed the framework on the server and deployed the substantial engines to convert Web sites to CGAs. This paper describes the framework in detail and also shows some example projects providing an automatically generated news show, a talk show, and personal blog visualization.
We propose a new television system based on a methodology that circulates a text-based script representing visual content on the Internet instead of transmitting complete video data. This Text Generated TV is realized by a technology called T2V (Text-To-Vision), which creates TV-program-like CG animation (CGA) automatically from a script. Our new TV system integrates the research results of the T2V technology that we have been studying. It provides User-Generated Content, automatically generated CGA from text sources available on the Internet, and interactive video-game-like applications in a TV context. The Text Generated TV can be regarded as a form of object-based content representation; hence it offers many possibilities and flexibilities, and we believe it has great potential for the future TV system. In this paper, we introduce our concept, a prototype development, and a discussion of the new aspects of our approach.
We have developed an application that enables a teacher to easily create video lectures by simply writing a script. A CG character speaks with voice synthesis while showing slides and explaining them with subtitles. The teacher just needs to write the speech lines and simple commands in a text editor, and the CG animation is automatically generated by the application. Both the CG lectures and actual lectures were provided to students in a real university course, and a survey was conducted to evaluate our method after the course. The results show that the CG lecture worked well, but a few students were not satisfied with it.
We have been developing a new type of Virtual Museum which enables users to participate in the space with both active and passive modes of operation. In the "active mode", the new virtual museum provides a user walkthrough using the realistic 3DCG-modeled museum space and artifacts in the space. And in the "passive mode", the system adds desired visual and audio effects such as camerawork, superimposed text, synthesized voice narration, post production processes, background music and so on to give users a TV commentary type of CG animation. Users can easily transition back and forth between the two modes of doing walkthrough in the space actively and watching the video content passively. This paper describes the details of the system design and the implementation followed by a discussion on the functioning prototype.
T2V (Text-To-Vision) is a technology which enables the automatic generation of TV-program-like CG animation by computer from a given script. This time, we have developed a software development kit (SDK) which makes it possible for developers to create various interactive applications in Unity utilizing the T2V technology. We first explain the SDK and its usage. Secondly, we introduce two applications made using the SDK: 1) automatic generation of a talk show from a bulletin board on the Internet, and 2) an interactive quiz application with a multi-story structure.
We have made an application for making rap music videos with CG animation by writing out a simple script. AquesTalk and TVML (TV program Making Language) are used for synthesized voice and real-time CG generation, respectively. A user can easily enjoy making a rap music video by writing speech texts and character movements along with the music beat in the script.
There is a wide variety of information on the Internet, and it is vital to organize and disseminate the information that people want to know. Expressing data in a way that people can easily understand is an important task for the media. We have developed a system that automatically converts WordPress blog entries into CG animations. By incorporating the required functionality into a WordPress theme, users can choose this theme and build a blog where readers can watch CG animations instead of reading the blog entries. The CG animation is created by TVML (TV Program Making Language), a technology that makes it possible to create a TV-program-like animation with CG characters with synthesized voices, image display, sound playback, superimposed text, etc. The system contributes to the WordPress community, where it is expected to serve as an auxiliary function of blog services in particular, and it may also function as a production tool for creating various types of CG animation, not just blog support. In this research, we have developed functioning software working on a PC and on an Android hand-held device to visualize blog entries; WordPress is specifically used as the blog system. We successfully showed the system at several exhibitions and also conducted a web survey to evaluate it. This paper describes the proposed system in detail and the evaluation test.
We have developed a blog system with WordPress which is capable of automatically converting its blog entries into CG animations. By embedding the necessary functions in a WordPress theme, a user can select this theme and construct a blog where readers can watch CG animations instead of reading the blog entries. The CG animation is created by TVML (TV program Making Language), a technology which makes it possible to produce a TV-program-like animation with CG characters with synthesized voices, image display, sound playback, superimposed text, and so on. This not only contributes to the WordPress community, where it is expected to serve as an auxiliary function of blog services in particular, but could also work as a production tool for creating various types of CG animation beyond blogging assistance.
To promote UGC (User Generated Content) on the Internet, several techniques have been developed to allow users to create CG animations quickly, only by writing scripts. TVML (TV program Making Language) is a technology capable of producing TV-program-like CG animation from text scripts. This paper proposes an application that automatically converts blog posts into CG animations in a news show format with the aid of TVML. The process is: 1) fetch the HTML of the blog posts and perform web scraping and natural language processing to obtain summarized speech texts, 2) automatically apply a show format derived from the analysis of a professional TV program to get a TVML script, and 3) apply the CG character, artworks, etc. that fit the blog content to obtain the final CG animation. In this paper, we describe the process, explain the application that we developed based on the method, and present evaluations of the output produced from the blog posts.
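The three-step process above can be sketched schematically; the scraping rules, the placeholder summarizer, and in particular the output script syntax are invented stand-ins, not real TVML or the paper's actual pipeline:

```python
# Schematic sketch of the three-step blog-to-news-show pipeline.
# The scraping rules, summarizer, and script syntax are all placeholders;
# the generated lines are NOT real TVML syntax.
import re

def scrape(html):
    """Step 1a: crude web scraping -- pull the title and paragraph texts."""
    title = re.search(r"<h1>(.*?)</h1>", html).group(1)
    paras = re.findall(r"<p>(.*?)</p>", html)
    return title, paras

def summarize(paras, max_sentences=2):
    """Step 1b: placeholder 'NLP' -- keep only the first few sentences."""
    sentences = [s for p in paras for s in p.split(". ") if s]
    return sentences[:max_sentences]

def to_script(title, lines):
    """Step 2: wrap the speech texts in a news-show format (schematic)."""
    script = [f'superimpose(text="{title}")']
    for line in lines:
        script.append(f'character: talk(text="{line}")')
    return "\n".join(script)

html = "<h1>Demo Post</h1><p>First point. A detail.</p><p>Second point.</p>"
title, paras = scrape(html)
script = to_script(title, summarize(paras))
```

Step 3 of the actual system would then select a CG character and artwork matching the blog content before rendering; the sketch stops at the script stage.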
TVML (TV program Making Language) is a technology capable of producing TV (television)-programme-like computer graphics (CG) animation from a written text script. We originally developed TVML and have been studying generative contents with its aid. This time, we have created an application that automatically converts blog posts into CG animations in a TV news show format. The process is: 1) fetch the HTML of the blog posts and perform Web scraping and natural language processing to obtain summarized speech texts, 2) automatically apply a show format obtained from the analysis of a professional TV programme to get a TVML script, and 3) apply the CG character, artworks, etc. that fit the blog content to obtain the final CG animation. In the demo session, we will explain the method and demonstrate the working application on a PC connected to the Internet, showing CG animations actually created on site.
We propose an automatic news broadcasting system which generates full-CG animated news shows from original text formats from the Internet. This paper introduces the overall system and provides a feasibility test and a working model of the news show application generated from an HTML Internet news site. We also describe future collaboration work.
We have been developing visualization systems that allow people to appreciate artworks in the various environments they are in. This time in particular, we focus on the Japanese woodblock print (ukiyo-e). People in the Edo period nurtured their artistic mind by holding ukiyo-e prints in their hands and enjoying them under various light conditions. With this in mind, we have developed a handheld display system that can display a piece of ukiyo-e following changes in the ambient light in real time. The purpose of this attempt is to identify what it means for a person to appreciate the art of ukiyo-e in a natural environment. This paper describes the system, which uses a display and an ambient light sensing device, and a color transform based on the spectrally digitized data of the picture. It also reports the results of a preliminary test using a tablet PC-based simplified version as a proof of concept.
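The ambient-light-driven colour transform can be sketched in a minimal form; the spectra and matching-function values below are toy numbers, not the measured data or the system's actual pipeline. The displayed colour follows from the product of the print's spectral reflectance and the sensed illuminant, integrated against colour-matching functions:

```python
# Toy sketch of spectral rendering under a sensed ambient light.
# All spectra and matching-function values are invented placeholders.

WAVELENGTHS = [450, 550, 650]  # nm, coarse samples for illustration

# Toy colour-matching functions (stand-ins for the CIE standard observer).
CMF = {"x": [0.3, 0.4, 0.9], "y": [0.05, 0.99, 0.3], "z": [1.7, 0.1, 0.0]}

def tristimulus(reflectance, illuminant):
    """Integrate reflectance * illuminant against each matching function."""
    spd = [r * e for r, e in zip(reflectance, illuminant)]
    return {c: sum(m * s for m, s in zip(CMF[c], spd)) for c in CMF}

ukiyoe_patch = [0.2, 0.6, 0.8]   # toy spectral reflectance of one print pixel
daylight = [1.0, 1.0, 0.9]       # sensed ambient spectrum (toy)
candlelight = [0.3, 0.7, 1.0]    # a warmer light shifts the colour

xyz_day = tristimulus(ukiyoe_patch, daylight)
xyz_candle = tristimulus(ukiyoe_patch, candlelight)
```

The same patch yields different tristimulus values under the two illuminants, which is exactly the change the handheld display tracks as the sensed ambient light varies.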
We propose an automatic and intelligent news broadcasting system which generates full-CG animated news shows from original text formats from the Internet. The news broadcasts are delivered to users on multiple platforms in the language of their choice, and users are also provided with interactive and intelligent news services. This paper introduces the overall system and provides a feasibility test and a working model of the news show application; the example shown is generated from an HTML Internet news site. We also describe the method of constructing a practical system. The future plan of action is to conduct large-scale experiments and field tests and thereafter feed the research outcomes into the standardization of the next-generation TV broadcast system.
We propose an 'Ultra-CG project' which promotes an extremely high-definition real-time computer graphics system with resolutions of 4K and 8K (Super Hi-Vision), higher than conventional HDTV. It is important for the project to study not only hardware and software requirements but also a content creation methodology for content shown on extremely high-resolution displays. We first developed a functioning test system, which exhibits a 'Virtual Museum' in 4K resolution, to assess the validity of our approach and clarify the tasks toward further application development. The system consists of a PC with high-speed graphics cards and a 4K monitor. The real-time 3D CG software and all the CG models are built on Unity, a 3D CG game engine used worldwide. We are now considering various feasible applications built on the system, such as exhibition, entertainment, medical use, and more. This paper describes the test system and discusses the future applications and the collaboration of content creators and system engineers.
We have researched and developed a 'Virtual Museum' in an extremely high-definition real-time computer graphics system with resolutions of 4K and 8K (Super Hi-Vision). We first developed a functioning test system, which exhibits Japanese artifacts, ukiyo-e prints and a panel, in 4K resolution. In our system, the artifacts have been digitized in ultra-high resolution and then positioned in a high-quality modeled exhibition space. A user can walk through the exhibition space, viewing the artifacts at a distance and also getting closer to observe their detailed surfaces seamlessly. With this method, we have successfully enhanced both the sense of being there and the sense of realness. In this paper, we first survey several virtual museums in practical use, then explain our system in detail and introduce experimental results with a discussion in comparison with the existing virtual museums.
We have been developing T2V (Text-To-Vision) technology, which makes it possible to produce CG animation from a given script. We developed an application called 'T2V Player' and have distributed it as freeware for years. The application works on a Windows PC to produce TV-program-like animation from user-input text using real-time CG, voice synthesis, etc. In this paper, we introduce the prototype of 'T2V on UNITY', which has been developed from scratch on the UNITY game engine. Owing to UNITY, we succeeded in enhancing its functions: multi-platform support, the availability of CG character data circulated in the UNITY community, the capability of applying the T2V method to game development, and more.
We have made an application for making rap music with CG animation by writing out a simple script. AquesTalk and TVML are used for synthesized voice and real-time CG generation, respectively. A user can easily enjoy making a rap music video by writing speech texts and character movements along with the music beat in the script.
We are conducting research and development to achieve perfect color reproduction of paintings on a display. This time, using hyperspectral data, we have constructed a display system that takes into account ambient light, monitor characteristics, and the visual characteristics of the human eye. This paper also includes miscellaneous thoughts on how to think about artworks when treating them as technical targets.
We have been developing real-time 3D CG based virtual museums where a user can freely walk through. One of the problems with virtual museums is that visitors tend to stay for a shorter time and leave earlier. To solve this problem, we propose a new method of incorporating an entertainment element into the museums. The museum is divided into two modes, a normal walk-through viewing mode and a game mode, and a visitor can move seamlessly between the two modes in the same space. The virtual museum space is used as it is, and a visitor can seamlessly transition to games such as an adventure game or a shooting game. If users get bored, they can immediately go back to the viewing mode. This is an adaptation of the various amusement facilities found in real museums. We have implemented the above idea and will conduct an evaluation experiment and discuss its usefulness.
This paper explains and details an automated TV news show program using the Text-To-Vision (T2V) technology. Today, 3D CG environments are used more and more often, even in classic media like TV. However, no fully virtual TV news show has yet appeared that stars only virtual characters and is completely automated, using news sources available on the Internet. Owing to T2V, we have made it possible to create this kind of automatic news show system, with interactive avatars, facial expressions, and multiple modular and dynamic scenes.
In this study, aiming at the reuse of Web content as a content production technique, we attempt the automatic generation of quiz content using Web news articles as the information source. Related work on automatic quiz generation has mainly used analytical methods, whereas this study uses a heuristic method in which a computer simulates the thought process a human follows when creating a quiz from a source text. We show experimentally whether automatic quiz generation is possible with this heuristic method and discuss the results.
T2V (Text-To-Vision) is a technology capable of automatic animated movie generation to assist individuals who do not have special knowledge of animation production. This paper shows an improvement of the function that animates BBSs (Bulletin Board Systems) with this technology. T2V has a package (the 2ch converter) capable of animating "2channel" (the largest BBS in Japan). The present 2ch converter, however, does not support dialogue situations based on quotation marks, so it cannot produce animation with dialogue. In this paper we propose a method of animation production that takes the conversation structure of the BBS into account to create more natural expression in the animation.
A conventional audio system is constructed based on communication theory, where the performance of the system is described by signal error, which does not always represent true sound quality. In order to make an audio system which represents the true admiration of music in one's mind, we have taken a special approach. Since it is considered that true admiration is expressed by adjectives, we obtained two principal phrases by performing multivariate analysis on the whole set of Japanese adjectives. Based on this methodology, we discovered the necessary physical electrical characteristics, with which we have successfully achieved a new audio system.
"Until now I had only ever listened to music with my ears; this time I could listen with every cell of my body. The transparent timbre is alive and seems to caress the skin." This state of "feeling with the heart" is a fundamental theme of the Society for Art and Science, which advocates the fusion of art and science, and this work addresses it head-on. This is a new theory of electroacoustics whose objectivity is proven philosophically, one that can convince even the majority of people who profess to be Japanese natural scientists and would otherwise dismiss such a state as occult. It defines an objective evaluation scale and is an acoustic theory of sound that conveys high-level affective information evoking emotion in the human heart; we have actually researched and developed the equipment. It is an acoustic theory of a different dimension from conventional hi-fi, which defines only physical distortion and says nothing about sound quality.
Automatic generation of TV-program-like CG animation (CGA) has been studied. We have developed appropriate show formats applied to the dialogues extracted from a bulletin board. Evaluation tests were performed by comparing the original text contents and the converted CGA to identify the advantages of our method.
We are investigating a system that provides a computer graphics (CG)-based television program to viewers by sending a script to the terminal, which renders it using CG and voice data. In this study, we assess viewer satisfaction with CG programs produced using the text-based TV program Making Language (TVML). To verify whether CG programs can be used as a substitute for real programs, we conducted a comparative evaluation experiment between real and CG programs. In addition, to determine the best device for viewing CG programs, we compared the same CG program viewed on a PC display and on a smartphone. The results suggest that CG programs are acceptable substitutes for real news and information-related programs, and that smartphones may be more suitable than PC displays for viewing CG programs.