Create a colorful and human-centered living environment
Interaction can take many forms. In our laboratory, we mainly focus on human-to-human, human-to-non-human, and non-human-to-human interaction mediated by artificial systems (here, "non-human" means anything other than a human that can take part in interaction, such as machines, robots, and animals). As the name of our laboratory indicates, we study systems that apply interaction technology, so the scope of our research is very broad and our research topics are not restricted to any particular area.
Research themes
Introduction of on-going research topics in the AIS lab.
Long-term Marine Observation System (BIWAKO-X): Water pollution, ecological change, and other aquatic issues in oceans and lakes are attracting attention, and many studies have tried to address them with sensing devices that monitor water-environment data. Our group has developed a symmetrically shaped autonomous surface sensing device for long-term water-quality monitoring. The device can move in arbitrary directions on the water surface and hold its position in practical fields with disturbances. This research aims to use such sensing devices as an alternative to human observation, and we are studying efficient position-keeping strategies for in-situ observation scenarios in disturbed, real-world conditions.
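As a simple illustration of the position-keeping problem, the sketch below shows a minimal proportional-derivative station-keeping loop for an omnidirectional surface vehicle. The gains, thrust limit, and interface are assumptions made for illustration, not the controller used on BIWAKO-X.

```python
# Hypothetical station-keeping sketch for an omnidirectional surface device.
# Gains and limits are illustrative assumptions, not BIWAKO-X parameters.
import numpy as np

KP, KD = 0.8, 0.3          # proportional / derivative gains (assumed)
MAX_THRUST = 1.0           # normalized actuator limit (assumed)

def station_keeping_command(pos, vel, target):
    """Return a 2D thrust command that pushes the device back toward `target`
    while damping its current drift velocity (e.g. due to wind or current)."""
    error = np.asarray(target) - np.asarray(pos)
    command = KP * error - KD * np.asarray(vel)
    norm = np.linalg.norm(command)
    if norm > MAX_THRUST:                  # saturate to the actuator limit
        command *= MAX_THRUST / norm
    return command

# Example: the device has drifted 2 m east of its observation point
print(station_keeping_command(pos=[2.0, 0.0], vel=[0.1, 0.0], target=[0.0, 0.0]))
```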
Open Set Learning: Traditional supervised learning trains a classifier in a closed-set world, where training and test samples share the same label space. Open set learning (OSL) is a more challenging and realistic setting in which some test samples come from classes that are unseen during training. Open set recognition (OSR) is the sub-task of detecting test images that do not belong to any of the training classes.
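For readers unfamiliar with OSR, the following minimal sketch shows the standard maximum-softmax-probability baseline, in which a test sample is rejected as unknown when the classifier's confidence falls below a threshold. The threshold value is an arbitrary assumption, and this is a textbook baseline rather than the method studied in our lab.

```python
# Minimal open-set recognition baseline: reject a test sample as "unknown"
# when the maximum softmax probability falls below a threshold.
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def open_set_predict(logits, threshold=0.7):
    probs = softmax(np.asarray(logits, dtype=float))
    if probs.max() < threshold:
        return "unknown"          # likely a sample from an unseen class
    return int(probs.argmax())    # otherwise, the usual closed-set prediction

print(open_set_predict([4.0, 0.5, 0.2]))   # confident -> known class 0
print(open_set_predict([1.1, 1.0, 0.9]))   # low confidence -> "unknown"
```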
Visual Information Acquisition Theory: In recent years, the amount of information we handle in daily life has increased dramatically, and people rely mostly on visual information when deciding how to act. We therefore believe that supporting visual information acquisition will be useful to many people. In this study, we establish a method for estimating a person's visual state from the relationship between dynamic changes in the visual environment and eye movements, and we link this understanding of the person's internal state to interaction support in real situations. Through experiments, we will examine whether the visibility-estimation results can be applied to various hypothetical situations, with the longer-term goal of supporting people as they select information in their daily lives.
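As a rough illustration of how raw eye movements can feed such an estimate, the sketch below labels gaze samples as fixations or saccades with a simple velocity-threshold rule. The sampling rate and threshold are assumed values, and this is not the estimation method developed in this study.

```python
# Illustrative velocity-threshold (I-VT style) labeling of gaze samples.
# Sampling rate and threshold are assumptions, not the lab's settings.
import numpy as np

SAMPLE_RATE_HZ = 120          # assumed eye-tracker sampling rate
VELOCITY_THRESH = 100.0       # deg/s, a commonly used saccade threshold

def label_gaze(gaze_deg):
    """gaze_deg: (N, 2) array of gaze angles in degrees."""
    gaze = np.asarray(gaze_deg, dtype=float)
    velocity = np.linalg.norm(np.diff(gaze, axis=0), axis=1) * SAMPLE_RATE_HZ
    labels = np.where(velocity < VELOCITY_THRESH, "fixation", "saccade")
    return labels  # one label per consecutive pair of samples

print(label_gaze([[0.0, 0.0], [0.05, 0.0], [3.0, 1.0], [3.05, 1.0]]))
```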
Ceiling/Wall Moving Mobile Module (MoMo): Our laboratory has studied intelligent spaces (iSpace) that provide services to people by using sensors, such as cameras, embedded in the environment. In the R+iSpace, devices are mounted on a Mobile Module (MoMo) that can move freely across wall and ceiling surfaces, so MoMo can reposition not only devices but also lighting, paintings, room partitions, and so on to create a more comfortable space. The aim of this research is to develop MoMo 5.3, which enables wireless communication between multiple MoMo units, together with an application that allows users to control MoMo remotely.
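To give a feel for what remote control of a MoMo unit might look like, the following sketch sends a hypothetical JSON move command over UDP. The message fields, port number, and IP address are illustrative assumptions, not the actual MoMo 5.3 communication protocol.

```python
# Hypothetical remote-control message for a MoMo unit, sent as JSON over UDP.
# Field names, port, and address are assumptions for illustration only.
import json
import socket

MOMO_PORT = 9000  # assumed port

def send_move_command(momo_ip, dx, dy):
    """Ask one MoMo unit to translate (dx, dy) meters along the wall/ceiling."""
    msg = json.dumps({"cmd": "move", "dx": dx, "dy": dy}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, (momo_ip, MOMO_PORT))

# Example: shift the unit carrying a projector 0.5 m to the right
# send_move_command("192.168.1.42", 0.5, 0.0)
```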
Drone-based Night Surveillance System: Security work is a burdensome occupation that demands long hours and night shifts, and the security industry faces a serious labor shortage as the number of facilities grows. Various security robots, such as wheeled robots and drones, have therefore been introduced in recent years. However, wheeled robots cannot cope with obstacles such as steps and people, and existing drones struggle with nighttime patrols and cannot interact with people who have lost their way. The Aerial Ubiquitous Display (AUD) is a security drone that provides wide-area surveillance from the sky, unaffected by obstacles such as people or steps. It carries an infrared camera for nighttime operation, which is used to detect human behavior so that the drone can approach the detected person, and a small projector so that people can interact with the drone by reading the information it projects. When completed, this system will be able to take over patrols of the campus, reducing labor costs and the workload of security guards.
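As a toy illustration of the sensing step only, the sketch below finds the warmest region in an infrared frame and returns a direction for the drone to approach. The temperature threshold and interface are assumptions and do not reflect the AUD implementation.

```python
# Toy sketch: locate a warm region in an infrared frame and compute an
# approach direction. Threshold and interface are illustrative assumptions.
import numpy as np

HUMAN_TEMP_THRESH = 30.0   # deg C, assumed threshold for a warm body at night

def approach_direction(thermal_frame):
    """thermal_frame: (H, W) array of temperatures. Returns a unit vector
    from the image center toward the warm region, or None if nothing is warm."""
    frame = np.asarray(thermal_frame, dtype=float)
    ys, xs = np.nonzero(frame > HUMAN_TEMP_THRESH)
    if len(xs) == 0:
        return None
    center = np.array([frame.shape[1] / 2, frame.shape[0] / 2])
    target = np.array([xs.mean(), ys.mean()])
    direction = target - center
    return direction / (np.linalg.norm(direction) + 1e-9)
```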
Intelligent Space: The Intelligent Space (iSpace) is a space that analyzes human behavior through computer vision, presents user-centric information, and autonomously controls robots as users intend. This study aims to support both humans and robots, enhancing their performance informationally and physically. To realize such a smart space, iSpace needs to know "when, where, who did what," so we recognize human gestures, facial expressions, and human-object interactions and integrate them into a human state. By mapping the human state onto an environmental map, iSpace can learn information such as the user's interests and how the space is used. The final goal of this research is for iSpace to infer better information presentation and robot control from this spatial data.
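The sketch below illustrates one possible way to encode the "when, where, who did what" record and accumulate it onto a 2D map of the space. The field names, grid resolution, and room size are assumptions made for illustration, not the lab's actual data model.

```python
# Illustrative "when, where, who did what" record and a simple interest map.
# All names and sizes are assumptions for illustration only.
from dataclasses import dataclass
import numpy as np

@dataclass
class HumanState:
    timestamp: float          # when
    position: tuple           # where, (x, y) in meters
    person_id: int            # who
    activity: str             # did what, e.g. "reading", "pointing"

GRID_RES = 0.5                            # meters per cell (assumed)
interest_map = np.zeros((20, 20))         # assumed 10 m x 10 m room

def accumulate(state: HumanState):
    """Increment the map cell where the activity was observed."""
    i = int(state.position[1] / GRID_RES)
    j = int(state.position[0] / GRID_RES)
    if 0 <= i < interest_map.shape[0] and 0 <= j < interest_map.shape[1]:
        interest_map[i, j] += 1

accumulate(HumanState(0.0, (2.3, 4.1), person_id=1, activity="reading"))
```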
Ubiquitous Display: The Ubiquitous Display (hereinafter "UD") is an omnidirectional mobile platform equipped with a pan-tilt mechanism and a projector. Its goal is to realize a new type of robot-based information support by communicating with people through projected visual information. To date, we have studied indoor building-guidance systems that exploit the UD's features, automatic correction of distortions in projected images, naked-eye stereoscopic projection methods that use optical illusions to create a pseudo-three-dimensional effect, and projection methods that attract users' attention. However, the conventional UD can support only a small number of people at a time, and the amount of information it can present is limited. We are therefore developing a Dual-Ubiquitous Display (hereinafter "D-UD") equipped with two projectors, which can support two people simultaneously, covers a larger projection range, and allows more diverse projection patterns, and is thus expected to provide information support more efficiently.
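As an illustration of the distortion-correction idea mentioned above, the sketch below pre-warps an image with a homography so that the projection appears rectangular on an oblique surface. It assumes a calibration camera whose view is normalized to the projector resolution, with `observed_corners` being where the projector's image corners actually appear in that view; this is a generic sketch, not the UD's correction pipeline.

```python
# Sketch of homography-based keystone correction (not the UD implementation).
# Assumes the calibration camera view is normalized to the projector size.
import cv2
import numpy as np

def prewarp(image, observed_corners):
    """Pre-warp `image` so that, after projection onto an oblique surface,
    it appears rectangular. `observed_corners` are the four corners of the
    distorted projection as seen by the camera (TL, TR, BR, BL order)."""
    h, w = image.shape[:2]
    desired = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    observed = np.float32(observed_corners)
    # Homography that maps the distorted quadrilateral back to a rectangle;
    # warping the source image with it cancels the projection distortion.
    H = cv2.getPerspectiveTransform(observed, desired)
    return cv2.warpPerspective(image, H, (w, h))
```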
Ubiquitous Delivery On-demand Robot (UDOn): Most delivery robots move on sidewalks to transport goods from warehouses to the entrances of buildings, but few can move between floors inside a building. In other words, existing robots can deliver goods to the entrance of an apartment building, yet carrying them to each room is not easy. Some robots that move between floors by riding an elevator have been developed, but they cannot be used in buildings without one. For these reasons, we propose a crawler-type delivery robot, the Ubiquitous Delivery On-demand Robot (hereinafter UDOn), which can climb up and down stairs. The robot is developed under three design concepts: low development cost, a simple mechanism, and small size, and it consists of two parts: a moving mechanism and a tilting mechanism.
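As a toy illustration of how a tilting mechanism could be used during stair climbing, the sketch below keeps a cargo bed level by counter-rotating it against the body pitch. The interface and limits are assumptions, and the actual role of UDOn's tilting mechanism may differ.

```python
# Toy sketch: keep the cargo bed level while the crawler pitches on stairs.
# The interface and the tilt limit are illustrative assumptions.
def cargo_tilt_command(body_pitch_deg, max_tilt_deg=35.0):
    """Return the tilt-mechanism angle that cancels the crawler's pitch."""
    command = -body_pitch_deg
    return max(-max_tilt_deg, min(max_tilt_deg, command))

print(cargo_tilt_command(30.0))   # climbing a 30-degree staircase -> -30.0
```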
Personal Mobility: In recent years, much attention has focused on the research and development of environmentally friendly personal mobility (PM) vehicles. The PM vehicles developed through basic research are robots with universal design and high usability, but few are equipped with autonomous driving, and we believe that adding it will reduce the burden of movement on users. Furthermore, cooperative control shared between the human and the system makes it possible to drive in a way that matches the characteristics of the user and the driving environment. In this study, we aim to make movement more convenient through autonomous driving of PM vehicles and cooperative control that combines human control with system control.
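The sketch below shows the basic idea of cooperative (shared) control in its simplest form: blending the user's command with the system's command by an authority weight. The fixed blending weight is an illustrative assumption, not the control law studied here.

```python
# Minimal shared-control sketch: blend the user's joystick command with the
# autonomous system's command. The authority weight is an assumed constant.
def shared_control(user_cmd, auto_cmd, authority=0.5):
    """Blend two (linear, angular) velocity commands.
    authority = 1.0 gives full control to the user, 0.0 to the system."""
    v = authority * user_cmd[0] + (1.0 - authority) * auto_cmd[0]
    w = authority * user_cmd[1] + (1.0 - authority) * auto_cmd[1]
    return v, w

# Example: the user steers right while the system avoids an obstacle
print(shared_control(user_cmd=(0.8, -0.4), auto_cmd=(0.5, 0.3), authority=0.6))
```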
Gamification in tourism: Many businesses in tourist and commercial areas want to attract more customers, yet even attractive places are not always visited. There is therefore a need for technology that encourages users such as tourists to walk around and visit various places. In this study, we use gamification to make walking around more enjoyable and to encourage users to take the actions we intend. The game presents the player with an image of a certain location and asks the player to photograph that location with their own camera; the game then measures the similarity between the captured photo and the reference image and awards a score. In this way, we are developing a game that promotes the actions of the users who play it.
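As a toy version of the scoring step, the sketch below compares the player's photo with the reference image using a color-histogram similarity and converts it to a score. The actual game may well use a different (for example, learned) similarity measure.

```python
# Toy photo-scoring sketch using color-histogram similarity. The real game's
# similarity measure may differ; this is only an illustration.
import numpy as np

def color_histogram(image, bins=8):
    """image: (H, W, 3) uint8 array. Returns a normalized joint RGB histogram."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def photo_score(player_image, reference_image):
    """Cosine similarity between histograms, scaled to a 0..100 score."""
    a, b = color_histogram(player_image), color_histogram(reference_image)
    similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return int(round(100 * similarity))
```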
Metaverse: The concept of a metaverse, in which users interact with others in a networked three-dimensional virtual space, has been attracting attention in recent years because of its affinity with entertainment using VR goggles. However, creating such a 3D virtual space is costly. If real space can be used to construct the virtual space, even people with no knowledge of 3DCG could easily build one, and users could enjoy moving through spaces that are normally inaccessible to them, such as under the floor or above the ceiling. In this research, we aim to construct a new metaverse in which robots moving around the real space act as users' avatars, measuring the structure of the space while allowing users to interact with each other through their robots.
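The sketch below illustrates one simple way an avatar robot could measure the structure of the real space, by accumulating 2D range scans into an occupancy grid that later serves as geometry for the virtual space. The sensor interface, map size, and resolution are assumptions for illustration.

```python
# Simplified mapping sketch: accumulate 2D range scans into an occupancy grid.
# Sensor interface, map size, and resolution are illustrative assumptions.
import numpy as np

GRID_SIZE, RES = 200, 0.05          # assumed 10 m x 10 m map at 5 cm resolution
grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint16)

def add_scan(robot_xy, robot_yaw, ranges, angles):
    """Mark the cells hit by each laser return as occupied."""
    ranges = np.asarray(ranges)
    angles = np.asarray(angles)
    hits_x = robot_xy[0] + ranges * np.cos(robot_yaw + angles)
    hits_y = robot_xy[1] + ranges * np.sin(robot_yaw + angles)
    i = (hits_y / RES).astype(int) + GRID_SIZE // 2
    j = (hits_x / RES).astype(int) + GRID_SIZE // 2
    valid = (0 <= i) & (i < GRID_SIZE) & (0 <= j) & (j < GRID_SIZE)
    np.add.at(grid, (i[valid], j[valid]), 1)   # count hits per cell
```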
Hydraulic-powered leg for a patient robot: One pressing issue in Japanese society is the aging of the population: the demand for caregivers far exceeds the supply, and there is an absolute shortage of care workers. To help address this, this study further develops the care-training robot built in previous research, improving the existing robot so that it is easier to handle and performs better. The existing robot uses a pneumatic cylinder for its joints, but when the knee joint bends, the air pressure changes and so does the tactile sensation; in this study, a water hydraulic cylinder is used instead so that the tactile sensation stays consistent. The purpose of this study is to realize a robot for care practice that feels similar to the human body. In the future, the robot will serve as a training tool for students at caregiver training schools who cannot practice caregiving motions sufficiently, and we aim to create an environment that makes such practice easier.
GAN-based AI make-up supporter for the visually impaired: People with visual impairments have difficulty applying makeup on their own. Blind makeup classes are currently held so that visually impaired people can learn to apply makeup properly, but when they apply makeup alone, it is hard to know whether it has been applied correctly. To solve this problem, we propose a makeup-check system using computer vision and machine learning, starting with lip cream. A GAN (Generative Adversarial Network) predicts the correct shape of the lips, and by comparing the predicted shape with the current shape, the system detects regions where lip cream has been applied incorrectly.
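As a minimal illustration of the comparison step only (the GAN itself and the lip-region detection are outside this snippet), the sketch below compares a predicted lip mask with the detected lip-cream mask to find regions that are missing or applied outside the lips; the interface is an assumption for illustration.

```python
# Minimal mask-comparison sketch for the lip-cream check. The mask extraction
# and the GAN prediction are assumed to be produced elsewhere.
import numpy as np

def lipstick_feedback(predicted_mask, applied_mask):
    """Both inputs are boolean (H, W) masks: the GAN-predicted correct lip
    region and the lip-cream region detected on the user's face."""
    predicted = np.asarray(predicted_mask, dtype=bool)
    applied = np.asarray(applied_mask, dtype=bool)
    missing = predicted & ~applied      # should be covered but is not
    overflow = applied & ~predicted     # applied outside the lip region
    iou = (predicted & applied).sum() / max((predicted | applied).sum(), 1)
    return missing, overflow, float(iou)
```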