Australian Catholic University says its new AI system proves driverless cars ‘within reach’
Cool research, weird flex.
A team of researchers at Australian Catholic University, with support from the National Key R&D Program of China, has unveiled a vision-based driverless vehicle technology framework that they claim can replace LiDAR.
Where have we heard that before? Oh, yes, that’s right. Ditching LiDAR for vision was the basis for Tesla’s “Full Self Driving” and “Autopilot” features, which are, ironically, still not capable of full self-driving or “autopiloting” a vehicle safely.
Have the researchers at ACU succeeded where the 10th most valuable company in the world (by market cap) hasn’t? Let’s take a look at the paper to find out.
Computer vision
One of the big problems with AI models is that they don’t live in the real world. They’re ones and zeroes inside of black boxes. As humans, we shouldn’t be too judgy, though. We might be nothing more than cells and processes firing around inside of a gooey grey matter box.
Either way, without a means by which to experience or perceive our environment in real time, neither one of us is very good at driving a car.
Around 2017/2018, computer vision systems (AI models trained to interpret real-time camera feeds) began making significant progress toward closing the gap between human-level perception and what autonomous systems could actually achieve for driving tasks.
This progress prompted many direct stakeholders in the driverless car industry to make bombastic declarations surrounding the immediate future of transportation.
Elon Musk, for example, boldly claimed there would be one million fully autonomous “robotaxis” in operation by 2020. It’s 2025 now, and the total number of fully autonomous cars or “robotaxis” in operation is exactly zero.
To be clear: We think modern driver assistance features are great. The use of AI technologies to make the world safer and better for humans should be the ultimate goal of everyone working in the industry.
But it’s important to make sure we’re not overselling these technologies. People have been harmed when FSD or Autopilot malfunctioned, or when drivers confused Tesla’s “Full Self Driving” branding with an AI model that actually exhibits full self-driving capabilities. That’s an easy mistake to make, given the misleading nature of the product’s name.
One way we can be certain Tesla hasn’t developed fully autonomous vehicles (even if you don’t read the mountains of research explaining why state-of-the-art, AI-powered driver assistance features don’t equate to full self-driving capabilities) is that Musk isn’t using his global platform to tell everyone it has.
Nobody wants Tesla to produce a legitimate, fully autonomous vehicle more than Elon Musk.
Australian Catholic University’s paper
One of the reasons why vision hasn’t worked as a full-spectrum solution for autonomous driving is the problem of “edge cases.” Essentially, no matter how thoroughly you train an AI model, anticipating every situation within the realm of possibility may be an intractable challenge.
For this reason, many researchers are taking a multifaceted approach. According to their paper, for example, the team at ACU believes its “human-centric autonomous intelligent systems” (HCAISs) combining advanced computer vision techniques and the ability to integrate with infrastructure and peer model networks present a path forward for the field.
Per their paper, these HCAISs “equipped with advanced perception, decision-making, and control technologies, offer vehicles comprehensive environmental perception and real time responsiveness, significantly bolstering their travel safety.”
Sounds good so far. We certainly agree that such a system would ostensibly be safer. But then things get a bit weird. Here’s an actual sentence from the paper’s abstract:
“Furthermore, the decision-making processes of human-centric autonomous driving systems are rational, adhere to established values, and consistently prioritize the will of humans, with system decisions serving a supporting role.”
We don’t believe that the system described throughout the paper is rational, or that it can consistently prioritize the will of humans, any more than we believe that “Gemini 2.0 Thinking” demonstrates the ability to think or that the ARC-AGI-2 benchmark actually measures direct progress toward AGI.
Read more: Google, OpenAI, Artificial Intelligence benchmarks, and AGI — Center for AGI Investigations
You may be tempted to think we’re making a mountain out of a molehill. But quotes printed in Australian Catholic University’s news blog might give you pause as well.
“This isn't just a concept,” says ACU Associate Professor Walayat Hussain, an author on the paper, “MonoFG has been tested on real-world datasets, proving that affordable, AI-driven self-driving technology is within reach.”
We disagree with that. The paper is interesting, and the system it describes has the potential to move the needle when it comes to benchmarking. We’re particularly impressed by the team’s potential contributions in the areas of 3D object identification, segmentation, and depth-mapping. But it’s unclear how this brings “affordable, AI-driven self-driving technology” within reach. Especially relative to Tesla’s efforts.
“In the future,” reads the paper, “to advance HCAISs in smart cities and enhance their intelligent information processing capabilities, we will explore human-centred autonomous driving systems aimed at achieving reliable interactions between humans and vehicle systems.”
This tells us the research direction is clearly aimed at the development of an “internet for cars” that connects machines, humans, and infrastructure via sensing and reporting.
While we don’t normally defend Tesla’s work, because it should be called “Driver Assistance” instead of “Full Self Driving,” the millions (or maybe billions by this point) of miles its cars have driven in simulations and on actual roads are a pretty good indication of where the state of the art lies.
As ACU’s research points out, it costs about $70,000 to buy a LiDAR unit. That’s a prohibitive cost for the manufacture of a consumer vehicle that can’t drive itself.
However, based on the limited research available, it’s hard to argue that consumer economics is what’s holding the entire world back from building fully autonomous cars.
If Tesla could slap a LiDAR unit inside a Model 3 and declare that it had reached “self-driving” capabilities to any degree that insurance companies, governments, and peer review could support, it could ostensibly sell the car at whatever price it wanted.
A Model 3 costs around $70K with full amenities today. If it could operate safely without human intervention, as a robotaxi, buying one for $140K would be a bargain.
We don’t believe driverless cars are now “within reach” thanks to the research from ACU. We think the work is strong and has definite potential to contribute to the field, but without peer review and practical demonstration, it seems hyperbolic to view it as a landmark.
The way we see it, fully autonomous passenger cars are almost exactly as close today as they were in 2019 when Elon Musk said there’d be a million “robotaxis” on the roads by 2020.
Art by Nicole Greene