Co-MLOps initiative and Edge.Auto take center stage

TIER IV PEOPLE shines a light on the people and teams whose unique experiences, backgrounds, and stories bring our mission to life. In this installment, Kazunari Kawabata and David Wong reflect on two big announcements at CES 2024, the launch of Edge.Auto and the Co-MLOps initiative.

An alumnus of the University of Tokyo’s Graduate School of Science, Kazunari joined TIER IV in 2019. He has served as product owner of Edge.Auto and is currently VP of the Future Solution team. In a previous role, he oversaw the development of CMOS image sensor technologies and products, and steered new business initiatives in North America.

New Zealand native David joined TIER IV in 2019. With a PhD in Information Science from Nagoya University’s Graduate School of Informatics, David leads the sensing team and contributed to the development of Edge.Auto.

ー Can we kick things off with a brief description of Edge.Auto?

Kazunari: TIER IV has three main products. The first two are Pilot.Auto, an autonomous driving software platform based on Autoware, and Web.Auto, a DevOps platform. The third, Edge.Auto, is a platform that integrates hardware such as electronic control units, cameras, LiDAR, and other sensors, along with the software needed to operate them, into a reference kit. Developers can gain a head start with Edge.Auto, as the platform’s hardware and software have been put through their paces during TIER IV’s development of autonomous driving technologies.

TIER IV’s main products comprise three platforms for autonomous driving

The platform comprises two solutions: automotive cameras and a sensor fusion system. The camera solution stemmed from the development of the C1 and C2 cameras. These devices, which are currently available for purchase, serve as reference cameras in numerous autonomous driving projects worldwide. More than 150 companies employ the cameras, and student teams have used them in competitions such as NHK’s robot contest. The cameras have also been utilized in research, such as in the development of the quadrupedal ANYmal robot and a project led by Yoichi Ochiai under the MITOU program for next-generation IT innovation leaders. We’re also making progress on the development of the C3 camera, which boasts a resolution of 8.3 megapixels.

Until now, high-quality cameras suitable for automotive use have been difficult to obtain on the general market. It's important to emphasize that we've made them accessible to anyone. Currently, they can be purchased through distributors and also via e-commerce sites such as Switch Science and RT. I believe this is a significant step toward making autonomous driving accessible to all, which is one of TIER IV's goals.

TIER IV’s C1 and C2 cameras

The sensor fusion solution supports the development of recognition functions that combine multimodal sensors such as cameras and LiDAR. It can be used for selecting sensor hardware, for spatial calibration between sensors, and for time synchronization, each of which requires a significant amount of expertise. TIER IV provides support through open-source tools and documentation, leveraging years of experience in the field.
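To give a feel for what spatial calibration between sensors buys you, here is a minimal sketch of projecting LiDAR points into a camera image with a pinhole model. The extrinsic transform `R`, `t` and intrinsic matrix `K` below are made-up example values for illustration, not parameters from TIER IV's calibration tools.

```python
# Minimal LiDAR-to-camera projection sketch, assuming a calibrated
# extrinsic transform (rotation R, translation t) and a pinhole
# intrinsic matrix K. All values are illustrative.
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project Nx3 LiDAR points into pixel coordinates (u, v)."""
    # Transform points from the LiDAR frame into the camera frame.
    points_cam = points_lidar @ R.T + t
    # Keep only points in front of the camera (positive depth).
    in_front = points_cam[:, 2] > 0
    points_cam = points_cam[in_front]
    # Pinhole model: pixel = K @ (X, Y, Z), then divide by depth Z.
    uvw = points_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, in_front

# Identity extrinsics and a simple intrinsic matrix for demonstration.
R = np.eye(3)
t = np.zeros(3)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
points = np.array([[0.0, 0.0, 10.0],   # straight ahead, 10 m away
                   [1.0, 0.0, 10.0]])  # 1 m to the side at 10 m depth
uv, mask = project_lidar_to_image(points, R, t, K)
print(uv)  # first point lands at the principal point (640, 360)
```

With an inaccurate `R` or `t`, the projected points drift off the objects they belong to, which is exactly why calibration expertise matters for camera-LiDAR fusion.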

David: One of the important features of Edge.Auto is its adaptability to different configurations. We work with a variety of LiDAR and ECU manufacturers. As well as being involved in camera development, we also develop camera drivers that can run on different ECUs.

At TIER IV, we're developing open-source software. As a member of the open-source community, we make an effort to accommodate different sensor configurations and different sensors. A few years ago, the sensing team started an initiative dubbed the Next-generation Sensor Suite, where we consider not the sensors that we have now, but the sensors that will be available in the future. I think this is another point that sets Edge.Auto apart.

ー Do you anticipate interest from sectors outside of the automotive space, such as agriculture or mining?

David: For Edge.Auto, yes. Because the problems that those industries face in automation are actually very similar to the ones in autonomous driving: challenges such as low-level sensing, calibration and perception. Edge.Auto is good from that point of view because it offers cost-effective solutions to overcome those problems. This is particularly true when you're looking at edge-based perception as part of a larger autonomous driving system.

Kazunari: Our products can be modified to fit the requirements of customers in a range of industries.

David: In industries like construction, for example, you don't necessarily have to worry about pedestrians, road laws and all the legal requirements that come with carrying human passengers. So, it's a lower barrier to entry.

One of the main focuses at CES was showcasing Edge.Auto. We were able to talk to customers about the product, our automotive cameras, and LiDAR integration, and gain an understanding of some of the requirements from other industries.

Kazunari: TIER IV's booth was divided into three sections. One section hosted the Shuttle Bus, alongside introductions to TIER IV’s Level 4 autonomous driving white-label solution, “fanfare,” Pilot.Auto and Web.Auto. The opposite side featured demonstrations and displays of Edge.Auto products, and the central area served as a meeting space for partners.

The Edge.Auto team ran four live demos at the booth. One featured LiDAR products by three different manufacturers, with the same software used to create a unified perception system. That’s part of what makes TIER IV unique. We also ran a demonstration of TIER IV's low-power edge AI technology at Hailo's booth in another venue. So in total, we conducted five live demos.
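The idea behind the multi-manufacturer demo, feeding heterogeneous LiDARs into one perception system, can be sketched roughly as follows. The mounting poses and sensor names are hypothetical; a real system such as Autoware would use ROS 2 point cloud messages and TF transforms rather than raw arrays.

```python
# Conceptual sketch of fusing point clouds from LiDARs made by
# different manufacturers into a single vehicle-centric cloud.
# Sensor names and mounting poses are invented for illustration.
import numpy as np

def to_base_frame(points, R, t):
    """Transform Nx3 points from a sensor frame into the vehicle base frame."""
    return points @ R.T + t

# Hypothetical mounting poses (identity rotations, different offsets).
mounts = {
    "lidar_front": (np.eye(3), np.array([2.0, 0.0, 1.5])),
    "lidar_left":  (np.eye(3), np.array([0.0, 1.0, 1.8])),
    "lidar_right": (np.eye(3), np.array([0.0, -1.0, 1.8])),
}

# Each sensor reports points in its own frame; here, dummy data.
clouds = {name: np.zeros((4, 3)) for name in mounts}

# Concatenate everything in the common base frame: downstream
# perception then sees one unified cloud, regardless of vendor.
unified = np.vstack([to_base_frame(clouds[name], R, t)
                     for name, (R, t) in mounts.items()])
print(unified.shape)  # (12, 3)
```

The vendor-specific part (drivers, packet parsing, timestamping) lives upstream of this step; once every cloud is expressed in the same frame, the perception stack does not need to know which manufacturer produced which points.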

David: We increased the number of demos this year. It was a challenge, but to be honest, it was enjoyable. We only had two engineers on-site for the Edge.Auto demo, and we didn't know if we were going to get everything working in time, but it was great to sit down with one of the best engineers I know and just build stuff.

It was stressful at the time, but I think we got a lot out of the experience. While we were at CES, we were thinking, “We're never doing this again!” However, there was a real sense of achievement once we got everything working, even though it meant being at the venue at 2 a.m. after a team dinner trying to fix a calibration issue. It was great to have the opportunity to build something and then show it off to visitors at the booth. The positive reactions were really satisfying.

ー What about Co-MLOps? How did that initiative come about?

Kazunari: A common challenge across the autonomous driving industry is large-scale data collection. Industry leaders can amass vast amounts of data internally, but many other companies lack the capability to collect equivalent volumes on their own. The development of commercial-grade autonomous driving AI requires vast amounts of high-quality data. As a result, the few companies with large data pools have been leading the technological advancements in the industry. This situation goes against TIER IV's vision of democratizing autonomous driving, as it concentrates the development within a subset of companies equipped with extensive data resources.

To address this challenge, we proposed the Co-MLOps platform, made possible by our wide network of partners built on open-source principles.

Overview of the Co-MLOps platform

The concept is essentially to share the effort involved in data collection. Enabling partners to share and accumulate data on the platform significantly enhances the scale, variety, and geographic coverage of the data. Partner companies gain access to large-scale datasets collected in locations worldwide, enabling AI development that was previously hindered by data scarcity. In addition, the platform will provide reference AI models and functions to efficiently improve those models, allowing companies to develop their own solutions.

In the current market, accessing the inner workings and data of widely-used solutions is challenging for various reasons. Many companies are seeking alternatives to such black-box products, which prompted TIER IV to come up with a solution.

David: The Co-MLOps concept grew out of Edge.Auto. We started talking about the need for data after developing the new sensor fusion solution. We recognized that, to build the perception systems we need for autonomous driving, the framework for data collection and AI model creation would greatly benefit from a collaborative effort with our partners, in line with TIER IV's development philosophy. Co-MLOps proposes a platform that facilitates this collaboration.

With all contributing partners having access to the same framework and data pool, it is up to each participant to build their own competitive products, which is similar to the concept of open-source software. TIER IV will provide support to enable AI development that meets the specific requirements of users.

Kazunari: Partners contribute by using their vehicles to collect data. This includes experimental vehicles from automotive manufacturers and Tier 1 suppliers, all fitted with our sensors. And of course, TIER IV also provides data. Think of it as something like a Genki Dama from the Dragon Ball manga.

David: Part of this is unifying the data collection, both on the hardware side and on the DevOps platform side. Because all collaborating partners use our proposed sensor system and processing pipeline, we can more easily ensure data consistency, quality, and usefulness for perception application development.

Kazunari: A lot of players are struggling with data collection. However, if 100 firms join hands to gather data, we each benefit from having much more at our disposal. That’s attractive for any company in the field. Companies that sign up to join Co-MLOps will gain access to that data pool. And it doesn’t stop with data. We’re also offering perception models. This is just the beginning. I'm excited about the prospect of many people joining as the data pool grows.

David: OEMs, Tier 1 and Tier 2 suppliers face challenges gearing up for Level 4 autonomous driving. With access to data and services under Co-MLOps, the transition becomes more accessible. It should also be possible to develop elemental technologies currently applied to Level 2 autonomous driving and advanced driver-assistance systems (ADAS) and apply them to Level 4 technology.

Kazunari: In 2023, we conducted proof-of-concept tests in eight areas in Japan, Germany, Poland, Taiwan, Turkey, and the United States. Utilizing data collected around the world, we developed models for tasks like lane detection with cameras, confirming the ability to recognize lanes with high accuracy in different regions. This year, we're expanding into more regions and enhancing our data collection. We're utilizing multiple high-performance LiDARs and high-resolution cameras (5-8 megapixels) to achieve 360-degree coverage of the vehicle's surroundings, with an extended range. Partner companies interested in joining the platform are encouraged to reach out.

ー Are there any other similar projects that you’re aware of?

David: In terms of others that are doing this kind of collaborative data collection, maybe it’s happening at the university level, but probably not at a commercial level. Not that we're aware of, anyway.

Kazunari: It's like the so-called “Egg of Columbus”: it might sound simple once you hear it, but no one had tried it before.

David: This is another instance where TIER IV’s position in the open-source community puts us in a unique position for this activity. We work with institutions from government and universities through to automotive OEMs and Tier 1 suppliers. 

We have always been open and eager to work with everybody. This extends to the sensor and software suppliers that we deal with too. This openness means that we have experience working collaboratively, and we are ready to provide solutions for anyone who wants to collect or use large-scale data for automotive and beyond.

ー What kind of challenges are you facing in the Co-MLOps project?

Kazunari: We are currently developing the key MLOps features that will be included on the platform, selecting regions for additional data collection based on discussions with partners, and working on establishing fair rules for all participating companies. Also, given the sensitive nature of data handling, we are working with the legal team to design processes that take into account personal information protection and regulations in different countries and regions.

David: For example, to comply with data protection laws, such as the EU General Data Protection Regulation (GDPR), we need to ensure data anonymization is properly implemented. We're developing methods for large-scale data sharing and collection, so it is something that we definitely have to address.
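One common anonymization approach consistent with what David describes is pixelating detected regions such as faces or license plates. The sketch below assumes the detection step has already produced bounding boxes; the boxes and image here are synthetic, and this is not TIER IV's actual anonymization pipeline.

```python
# Illustrative anonymization by pixelation: each bounding box
# (x0, y0, x1, y1) in a grayscale image is reduced to block-level
# averages, destroying identifying detail. Boxes are assumed to
# come from an upstream detector (not shown).
import numpy as np

def pixelate_regions(image, boxes, block=8):
    """Return a copy of `image` with each box pixelated."""
    out = image.copy()
    for x0, y0, x1, y1 in boxes:
        for by in range(y0, y1, block):
            for bx in range(x0, x1, block):
                # Replace each block-sized tile with its mean value.
                tile = out[by:min(by + block, y1), bx:min(bx + block, x1)]
                tile[...] = tile.mean()
    return out

# Synthetic 32x32 grayscale "image" with one high-contrast patch.
img = np.zeros((32, 32), dtype=np.float64)
img[8:16, 8:16] = 255.0
anon = pixelate_regions(img, [(0, 0, 32, 32)], block=16)
print(anon[0, 0])  # the bright patch is smeared into its 16x16 tile
```

Pixelation, like blurring, is irreversible at a coarse enough block size, which is what matters for GDPR-style anonymization: the original detail cannot be recovered from the shared data.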

Kazunari: We are collaborating with the Amazon Web Services (AWS) team on the design of the platform. They have extensive experience in supporting a diverse range of customer requirements. We’ve received valuable advice on the design, development, and operation of cloud services, including security aspects.

We’ve got our hands busy with Co-MLOps at the moment. So we hope this blog post piques the interest of talented individuals out there. If you’d like to get involved, join us!


・・・

TIER IV is always on the lookout for passionate individuals to join our journey. If you share our vision of making autonomous driving accessible to all, get in touch.

We’re currently hiring for the following positions:

Future Solution team


Edge.Auto team


Administration team

Visit our careers page to view all job openings.


If you’re uncertain about which roles align best with your experience, or if the current job openings don’t quite match your preferences, register your interest here. We’ll get in touch if a role that matches your experience becomes available, and schedule an informal interview.

・・・


Media contact

pr@tier4.jp

Business inquiries

sales@tier4.jp

Social Media

X (Japan/Global) | LinkedIn | Facebook | Instagram | YouTube

