VIS / Vision Assistance Race

W-Visions

Japan

About the Team

The team W-Visions was formed within the Intelligent Mechatronics laboratory at Wakayama University, which researches intelligent mechatronics technology to make human life more comfortable.

Our team consists of three members: the pilot, the team manager, and a master's student. By exchanging opinions among team members and gathering input from other lab members, we develop the device from diverse points of view.

We are creating a user-centric system by incorporating each member's perspective on topics such as robot control, deep learning, and image processing, and by refining the system based on the pilot's feedback.

The system is deliberately kept simple, and by incorporating the pilot's intuition we can make it user-friendly.

About the Pilot

Toko Momokitani is 18 years old and a first-year student at a Japanese university. Since he was born with small eyeballs and no vision, he is unable to perceive even ambient light or shadows.

Toko has a strong desire to improve and a very proactive personality, so he gives us a lot of useful advice for developing the device. He is also enthusiastic about activities beyond the race, making him an indispensable member of the team who constantly raises our motivation.

About the Device

Our device uses a camera that looks down on the pilot's surroundings to efficiently capture the environment around them. This is achieved by attaching the camera to the end of a stick and pointing it downwards.

The camera is an Intel RealSense D455, which not only captures images with a wide angle of view but also acquires depth information. Combining this wide-angle environmental information with depth data, the system can effectively detect obstacles and objects of interest and efficiently communicate them to the user.
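As a rough illustration of how colour and depth frames can be read from a D455, here is a minimal sketch using Intel's pyrealsense2 SDK. The stream resolution, frame rate, and the obstacle-threshold idea in the comments are assumptions for illustration, not the team's actual configuration.

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# The D455 provides both a wide-FOV colour stream and a depth stream.
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
profile = pipeline.start(config)

# Scale factor that converts raw depth units to metres.
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

try:
    frames = pipeline.wait_for_frames()
    color = np.asanyarray(frames.get_color_frame().get_data())
    depth = np.asanyarray(frames.get_depth_frame().get_data()) * depth_scale
    # depth[y, x] now holds the distance in metres to the point seen at
    # pixel (x, y); obstacle detection could, for example, flag any region
    # closer than some safety threshold.
finally:
    pipeline.stop()
```

With the camera mounted on the stick and facing downwards, each depth frame covers the ground area just ahead of the pilot, which is what the detection stage operates on.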

Our system is built using ROS2, which allows multiple programs to be combined into a single system while keeping the user interface simple. Image data is processed using deep learning and other techniques, and the resulting information is conveyed to the user through audio output.
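To show how such a pipeline might be wired together in ROS2, here is a minimal rclpy sketch of a node that subscribes to camera images and publishes guidance strings for a downstream speech node. The topic names, message choice, and the detection hook are hypothetical placeholders, not the team's actual implementation.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import String


class GuidanceNode(Node):
    """Subscribes to camera images and publishes spoken guidance text."""

    def __init__(self):
        super().__init__('guidance_node')
        # Topic names below are illustrative.
        self.subscription = self.create_subscription(
            Image, '/camera/color/image_raw', self.on_image, 10)
        # A separate text-to-speech node could turn these strings into audio.
        self.speech_pub = self.create_publisher(String, '/guidance/speech', 10)

    def on_image(self, msg: Image) -> None:
        # Placeholder for image processing, e.g. a deep-learning detector.
        label = self.detect_object(msg)
        if label is not None:
            out = String()
            out.data = f'Object ahead: {label}'
            self.speech_pub.publish(out)

    def detect_object(self, msg: Image):
        # Hypothetical hook: a trained model would run on the image here.
        return None


def main():
    rclpy.init()
    node = GuidanceNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Splitting camera capture, detection, and speech output into separate nodes in this way is what lets several programs cooperate as one system while the interface presented to the pilot stays simple.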
