Waymo recently released on YouTube a lecture on “Waymo Driver,” the company’s fifth-generation robocar platform. Presented by YooJung Ahn, head of design at Waymo, the video shares the new platform’s basic design ideas.
What caught many industry observers’ eyes about the latest robocar was Waymo’s liberal use of sensors, including improved lidars, radars, and 29 cameras. The lecture has been described as “an insight from the inside.”
Following are highlights noted by Waymo:
- More than 20 million miles (32 million kilometers) driven; more than 10 billion miles (16 billion kilometers) in the simulator; tested in more than 25 cities in the USA
- Sensors have a 360-degree view with visibility up to 350 meters
- Four uses for Waymo Driver: ride-hailing (what used to be referred to as ride-sharing), trucking, deliveries, private vehicles
- Four design principles: simple, honest, approachable, delightful
- 29 cameras with overlapping field of view; equipped with cleaning system and heating
- Sensors also tested in Death Valley, at temperatures up to 50 degrees Celsius
- Sensors designed to meet stringent automotive requirements
- A platform developed for mass production
- Adaptable to different vehicle platforms
- LiDAR dome with an LED display to communicate with waiting passengers
However, in keeping with Waymo’s chronic secretiveness, it’s no surprise that Ahn’s lecture actually said very little.
Ahn was vague, if not silent, for example, about the specs of “lidar, cameras and radars and compute,” all of which are custom-designed Waymo products deployed in the fifth-generation Waymo Driver.
Consider, for example, the Waymo Driver’s compute.
No industry analysts at this point know the computing architecture deployed in this Waymo Driver. Asked about the architecture, Ian Riches, director for the automotive electronics service at Strategy Analytics, told us, “We have no idea.”
Some observers speculate that Waymo has amalgamated CPUs (e.g., Intel’s Xeon), GPUs (e.g., Nvidia’s) or FPGAs, in addition to Google’s own TensorFlow-based accelerators. It’s possible that Waymo designed its own highly optimized custom silicon to offload certain workloads, Riches noted.
Ahn in her presentation disclosed only two things about Waymo’s compute engine: 1) It’s custom designed, and 2) while offering “even more powerful” compute power, the compute engine’s volume is now “successfully reduced,” providing more trunk space. (Translation: On previous platforms, Waymo’s robocar depended on a server-like computer unit that occupied pretty much the whole trunk.)
How many sensors can you stuff into a robocar?
As Yole Développement pointed out in its latest report, entitled “Sensors for Robotic Mobility 2020,” “Robotic vehicles do not focus on the cost and long-term reliability issues that are the main concern for other automobiles. All that matters are the immediate availability, performance, and supportability of their sensor suite.”
If cost is no object, then what is at stake here? The issue is that limitations in downstream computing power could ultimately hamper robocar system designers’ ability to add more sensors to the vehicle, in Yole’s opinion.
Put simply, how many more cameras can a robotaxi designer throw at its system? “The robotic sensor dataflow is utterly limited by downstream computing power,” wrote Pierre Cambou, principal analyst at Yole. In a recent interview with EE Times, he cautioned: “We might come to a point that [AV] system designers will no longer be able to add more cameras.”
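To see why the dataflow matters, consider a back-of-the-envelope estimate of what 29 cameras alone push downstream. The resolution and frame-rate figures below are illustrative assumptions, not published Waymo specs:

```python
# Back-of-the-envelope sensor dataflow estimate. All figures below are
# illustrative assumptions, not published Waymo specifications.

def camera_bandwidth_gbps(count, width, height, fps, bits_per_pixel=24):
    """Raw (uncompressed) video bandwidth in gigabits per second."""
    return count * width * height * fps * bits_per_pixel / 1e9

# 29 cameras at an assumed 1080p resolution and 30 frames per second:
cams = camera_bandwidth_gbps(29, 1920, 1080, 30)
print(f"29 cameras: ~{cams:.1f} Gbit/s of raw pixels")  # ~43.3 Gbit/s
```

Even before lidar and radar are counted, tens of gigabits per second of raw pixels have to be moved, filtered, and processed in real time, which is the bottleneck Cambou is pointing at.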
Cambou studied the relationship between the increased dataflow in an AV and the computing power required to process the added data. “My conclusion is that sensor dataflow increases with the square root of computing power increase.”
Noting that, by this relationship, any increase in dataflow demands a quadratic increase in computing power, Cambou described his conclusion as “a terrible discovery.” His judgment is that the dataflow budget in an AV is already severely limited.
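Cambou's rule of thumb can be restated the other way around: if dataflow grows with the square root of compute, then compute must grow with the square of dataflow. A minimal sketch of that arithmetic:

```python
# Cambou's rule of thumb: dataflow grows with the square root of compute,
# so compute must grow with the square of dataflow.

def compute_multiplier(dataflow_multiplier):
    """Compute-power factor needed to absorb a given dataflow factor."""
    return dataflow_multiplier ** 2

# Doubling the sensor dataflow (say, twice as many cameras) calls for
# roughly four times the downstream compute; tripling it, nine times.
print(compute_multiplier(2))  # 4
print(compute_multiplier(3))  # 9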
Unlike many processor engineers in Silicon Valley, who prefer to deny the looming expiration of Moore’s Law, Cambou sees the AV compute dilemma as a matter of sensor dataflow. He anticipates that the demand for computing power in autonomous vehicles will ultimately explode. The consequence, in Cambou’s opinion, is that the AV industry needs “a new paradigm.”
One way to avoid the pending “dataflow vs. compute power” impasse is to start looking for improved data quality, said Cambou.
The AV industry, he says, needs a diversity of better performing sensors. In addition to deploying better lidars and radars, sensors like thermal cameras (Waymo doesn’t have one yet, but analysts say it will) are necessary “to improve the quality of data within a data budget,” said Cambou.
He mentioned that “event-based cameras,” such as those made by Prophesee, hold promise as an answer to limited compute power. “Neuromorphic is bio-inspired ‘sensing’ and ‘computing.’ By combining both, this could present, although not a guarantee, a third way of computing,” he added.
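The appeal of event-based sensing for the dataflow budget is that only pixels that change produce output, rather than every pixel every frame. The toy comparison below uses assumed numbers (the 5% activity figure and 40-bit event size are illustrative, not Prophesee specs):

```python
# Toy comparison of frame-based vs. event-based camera readout.
# The activity fraction and bits-per-event are illustrative assumptions,
# not Prophesee specifications.

PIXELS = 1280 * 720
FPS = 30
FRAME_BITS = 24   # bits per pixel, conventional frame-based camera
EVENT_BITS = 40   # bits per event (pixel address + timestamp + polarity), assumed
ACTIVITY = 0.05   # fraction of pixels firing an event per frame interval, assumed

frame_rate_bps = PIXELS * FPS * FRAME_BITS
event_rate_bps = PIXELS * ACTIVITY * FPS * EVENT_BITS

print(f"frame-based: {frame_rate_bps / 1e6:.0f} Mbit/s")  # 664 Mbit/s
print(f"event-based: {event_rate_bps / 1e6:.0f} Mbit/s")  # 55 Mbit/s
```

Under these assumptions the event-based readout carries roughly a tenth of the data, which is the kind of saving Cambou argues could stretch a fixed compute budget.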
Waymo’s home-grown sensors
One might regard 29 cameras as a bit excessive. But Waymo has taken several measures to combine these custom-built sensors to enable them to collectively improve data quality.
- Waymo’s lidars
Ahn explained that Waymo’s new family of lidars comes with “an even higher resolution across a wider range.” She noted, “As one of the Waymo Driver’s most powerful sensors, our lidar paints a picture of its surroundings in great detail. It sees the world in 3D and can see in the dark of night without any illumination.”
About Waymo’s 360 Lidar, mounted on the vehicle’s dome, she said, “It can see up to 300 meters away, and provides a bird’s eye view of the cars, cyclists, and pedestrians surrounding the vehicle. At the same time, our latest perimeter lidars placed at four points around the vehicle offer unparalleled coverage with a wide field of view to detect objects close to the vehicle.”
- Waymo’s vision system
Ahn said the system provides the Waymo Driver “with higher resolution images and greater perspective.” She explained that the cameras’ overlapping fields of view virtually surround the vehicle. She also noted that the cameras are assembled “with cleaning systems and heaters for the best performance in any weather condition.”
Ahn said that the robocar’s new long-range cameras and 360 vision system “see much farther than before, allowing us to identify important details, like stop signs greater than 500 meters away. In addition, our new perimeter vision system works in conjunction with our perimeter lidars to give the Waymo Driver another perspective of objects close to the vehicle.”
Waymo Driver’s peripheral vision system reduces blind spots caused by parked cars or large vehicles. “These new cameras enable us to peek around objects, such as a truck driving in front of us, to see what might be there, and to make a decision about what to do,” said Ahn. “Together, these various types of cameras allow us to make decisions earlier and faster with even more information than we’ve ever had before.”
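The value of overlapping fields of view can be sketched with a small coverage check: given a ring of cameras, does every bearing around the vehicle fall inside at least one camera's view? The headings and field-of-view angles below are hypothetical, not Waymo's actual layout:

```python
# Sketch: does a ring of cameras cover 360 degrees around the vehicle?
# Camera headings and fields of view are hypothetical, not Waymo's layout.

def covers_360(cameras):
    """cameras: list of (heading_deg, fov_deg) pairs. Returns True if every
    integer bearing 0..359 falls inside at least one camera's field of view."""
    def sees(heading, fov, bearing):
        # Signed angular distance from camera heading to bearing, in [-180, 180)
        diff = (bearing - heading + 180) % 360 - 180
        return abs(diff) <= fov / 2

    return all(
        any(sees(h, f, b) for h, f in cameras)
        for b in range(360)
    )

# Eight hypothetical cameras with 60-degree FOV, one every 45 degrees:
# each seam has 15 degrees of overlap, so coverage is complete.
ring = [(heading, 60) for heading in range(0, 360, 45)]
print(covers_360(ring))  # True
```

With 40-degree cameras at the same spacing, gaps open up and `covers_360` returns False; overlap is what keeps a failed or occluded camera from creating a blind bearing.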
- Waymo’s radars
Ahn explained that Waymo’s new high-resolution imaging radar has six vantage points around the vehicle. It “tracks both static and moving objects, can see small objects at greater distances, and distinguish between closely spaced objects.”
She stressed, “Radar complements lidar and cameras with its unique capabilities in weather conditions such as rain, fog, and snow.”
The upshot of having a sensor suite with lidar, cameras, radar, and an AI compute platform is that, when you put them all together, “these sensors give our vehicles a 360-degree view of the world, over 300 meters away,” Ahn explained.
But most likely, software will determine the success of the Waymo Driver. Ahn, unfortunately, covered this issue with generalities:
“On the software side, the brain of our self-driving vehicles, we take all of the information our sensors collect to answer four key questions: Where am I? What’s around me? What will happen next? and what should I do? Together, our hardware and software work in concert to paint the complete picture of the world around the car and enable us to navigate roads safely.”
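Ahn's four questions map neatly onto the classic self-driving software stages of localization, perception, prediction, and planning. The skeleton below is purely illustrative — Waymo has not published its stack, and every function here is a hypothetical stub:

```python
# Ahn's four questions as a classic self-driving software pipeline.
# Purely illustrative stubs -- Waymo has not published its software stack.

def localize(sensor_data, world_map):      # Where am I?
    return sensor_data.get("gps", (0.0, 0.0))

def perceive(sensor_data, pose):           # What's around me?
    return sensor_data.get("detections", [])

def predict(objects):                      # What will happen next?
    return [{"object": o, "motion": "constant-velocity"} for o in objects]

def plan(pose, forecasts, world_map):      # What should I do?
    return "brake" if forecasts else "cruise"

def drive_step(sensor_data, world_map):
    pose = localize(sensor_data, world_map)
    forecasts = predict(perceive(sensor_data, pose))
    return plan(pose, forecasts, world_map)

print(drive_step({"gps": (37.42, -122.08), "detections": ["pedestrian"]}, {}))  # brake
```

The point of the sketch is the data dependency: each question consumes the answer to the previous one, which is why hardware and software must, in Ahn's words, "work in concert."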
Editor’s note: This article originally appeared on EPSNews sister publication EE Times.