Tuesday, February 28
Autonomous Vehicles 2023
The past decade has seen a wave of progress and setbacks in autonomous driving. Automakers have continued to prioritize R&D, yet progress remains incremental and not at a pace that puts Level 4 autonomy near mass deployment.
Deployment of fully autonomous vehicles hasn’t progressed as quickly as we’d hoped, though incremental algorithm development has made advanced driver-assistance system (ADAS) technology and other subsystems safer on the march toward autonomy.
Last year saw a few major milestones in autonomous driving, including General Motors and Trimble logging 34 million miles (54.7 million km) of successful hands-free driving, and Mobileye’s spin-out from Intel, which raised $861 million in its IPO. So, interest and progress in the industry persist.
As we begin 2023, we remain hopeful for more solid advancements in the self-driving world.
Until now, the industry at large has taken an ADAS approach, providing semi-autonomous driving via cameras and lidar sensors. While it’s cost-effective to use only cameras and basic sensors, a more robust and safer option incorporates additional sensors. This is where the future lies, but the cost of mass-deploying these sensors is not yet low enough to make mass-market adoption achievable.
So, the question remains: How do we develop a mass-market approach to full autonomy at a price point similar to what we have today – or even lower? The answer lies in moving beyond sensor hardware and applying better algorithms that can spot pedestrians, detect lane markers, account for bad weather, update automatically in real time and deliver better overall perception through new techniques coming to market.
In current ADAS technology, if a car’s sensor is muddied by road grime or weather conditions, it stops functioning and the driver is forced to take back control of the vehicle. General Motors’ Super Cruise is a good example: it is an affordable assistance mechanism that provides some level of autonomy, but comprehensive maps would make it more robust and bring it closer to full autonomy. Keeping maps accurate in current semi-autonomous vehicles is a laborious process: specialized vehicles gather, relay and download information, and humans then declutter and clean up the maps before they can be useful and downloaded into a vehicle. By the time all this happens, the maps could easily be outdated.
To achieve full autonomy, the promise of mass-deployable, solid-state sensors in a true fused array needs to be realized – which is tough in complex environments and all weather conditions. In addition, real-time updating of driving conditions and maps is critical: it forms a continuous feedback loop, enabled by 5G, for knowing exactly where a vehicle is relative to others and to the road.
In 2023, the industry must overcome two principal challenges if autonomy is to move to the next level. First, we need precise absolute positioning in all current GNSS-denied or -obscured environments, no matter the weather or road conditions. Precise absolute positioning is defined as lane-level (10 cm [4 in.]) precision. Achieving that in every facet of a typical drive – from freeway to downtown corridor to underground – is essential.
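To make the GNSS-denied challenge concrete, here is a minimal dead-reckoning sketch in Python – an illustrative toy, not any production positioning stack – showing how wheel odometry can propagate a vehicle’s pose through a stretch where satellite fixes are unavailable. In practice such estimates drift quickly, which is exactly why map matching and other corrections are needed to hold lane-level precision:

```python
import math

def dead_reckon(start_x, start_y, start_heading, odometry):
    """Propagate a 2-D pose from wheel odometry while GNSS is unavailable.

    odometry is a sequence of (distance_m, heading_change_rad) increments
    from wheel encoders and a yaw-rate sensor. Positions are in meters.
    """
    x, y, heading = start_x, start_y, start_heading
    for distance, dheading in odometry:
        heading += dheading              # update heading first
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    return x, y, heading

# Example: five 1 m steps straight ahead (heading 0) through a tunnel
x, y, h = dead_reckon(0.0, 0.0, 0.0, [(1.0, 0.0)] * 5)
```

Small errors in each `(distance, heading_change)` increment accumulate without bound, so dead reckoning alone can only bridge short GNSS outages before an absolute fix is required again.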
Adding to this requirement is the need for Automotive Safety Integrity Level (ASIL) certification of the software, hardware, correction source and integrity management. With all parts ASIL-certified, OEMs will feel more confident that the solutions can be used in mass production.
Second, to make these maps more accurate, crowdsourcing – using passenger or shared mobility vehicles, not vehicles dedicated solely to mapping – is crucial. Maps are only as good as they are current, so a continuous stream of data from road vehicles that regularly drive the same path is critical to keeping maps current and providing complete situational awareness around a vehicle. Techniques such as map-based localization, which fuses sensor data to help derive a correct position, are paramount to that process. By combining a vehicle’s GPS positions with visual cues and odometry, map-based localization can determine where the vehicle is and perceive what is around it.
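As a rough illustration of the fusion idea described above – a toy variance-weighted average, not the algorithm any particular vendor uses – a noisy GPS fix and a map-matched estimate can be combined so that the more certain source dominates the result:

```python
def fuse(gps_pos, gps_var, map_pos, map_var):
    """Variance-weighted fusion of two position estimates (1-D, in meters).

    Each estimate is weighted by the inverse of its variance, so the
    more confident source pulls the fused position toward itself.
    """
    w_gps = 1.0 / gps_var
    w_map = 1.0 / map_var
    fused = (w_gps * gps_pos + w_map * map_pos) / (w_gps + w_map)
    fused_var = 1.0 / (w_gps + w_map)   # fused estimate is tighter than either input
    return fused, fused_var

# GPS says 10 m with 4 m^2 variance; map matching says 12 m with 1 m^2 variance
pos, var = fuse(10.0, 4.0, 12.0, 1.0)   # → 11.6 m, closer to the map estimate
```

Real systems extend this idea to full 2-D or 3-D state with a Kalman or particle filter, but the principle is the same: each sensor contributes in proportion to its confidence.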