It’s been a few days now since I completed my OpenPilot Prius install. I’ve put more miles on the car (700) than I did on the Focus it replaced in a month.
Between charging the car each night and the 60-80 MPG I achieve once I outpace the vehicle’s electric-only range, I’ve still only burnt half a tank of gasoline; the dealer provided a full tank when the car was new.
The whir of the motors, the static-electricity noise the car emits from the front at low speed to warn pedestrians of the silent vehicle’s approach; it’s an interesting feeling plugging a vehicle in each night like one would a cell phone.
OpenPilot is engineered for several vehicle models, each with its own systems and idiosyncrasies. Toyotas are known to have laggy steering actuators and possibly variable-ratio steering. What does this translate to in the real world? OP slowly oscillates within the desired lane at interstate speed.
It’s a dance of the algorithm adjusting, overshooting, readjusting, and repeating within a 1-2 degree steering angle range. That doesn’t sound like much, but inputs are greatly exaggerated at speed.
This has led to a series of passionate community members each working on their own solutions to the problem. The developer I’m becoming friends with maintains a fork of an older version with something called “VSR”, or Variable Rate Steering. Basically, it applies a slope function to any desired angle change under 1.2 degrees, adjusting the effective steering ratio for small corrections to filter out small, busy angle changes.
Any angles larger than 1.2 degrees fall back to stock behavior as part of the safety-check function, so the vehicle can rapidly correct as needed or perform large maneuvers.
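To make the idea concrete, here’s a minimal sketch of what that filter might look like. This is my own illustration, not the fork’s actual code: the function name, the linear slope shape, and the way the gain is computed are all assumptions; only the 1.2-degree threshold and the fall-back-to-stock behavior come from the description above.

```python
# Hypothetical sketch of the "VSR" filter described above.
# Assumption: a simple linear slope, so the smaller the correction,
# the more heavily it is attenuated.

VSR_THRESHOLD_DEG = 1.2  # corrections at or above this use stock behavior


def vsr_filter(desired_angle_deg: float, current_angle_deg: float) -> float:
    """Dampen small steering corrections; pass large ones through unchanged."""
    error = desired_angle_deg - current_angle_deg
    if abs(error) >= VSR_THRESHOLD_DEG:
        # Stock behavior: apply the full correction (safety fallback).
        return desired_angle_deg
    # Slope function: scale the correction by its own size relative to
    # the threshold, so a 0.1 degree wiggle is attenuated far more than
    # a 1.1 degree one.
    gain = abs(error) / VSR_THRESHOLD_DEG  # ranges from 0.0 to 1.0
    return current_angle_deg + error * gain
```

The appeal of a scheme like this is that it never touches large maneuvers, only the busy sub-degree hunting that causes the oscillation.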
There’s another developer working on something called “feed-forward” lateral control. From what I understand, the behavior of the lateral control is dependent upon the desired angle.
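As I understand the feed-forward idea, instead of only reacting to the error between desired and measured angle, the controller commands a baseline torque predicted directly from the desired angle (and speed), with feedback only cleaning up the residual. A rough sketch, where the gains, the speed-squared term, and the function name are purely illustrative assumptions on my part:

```python
# Hedged sketch of feed-forward lateral control: predict most of the
# torque from the desired angle itself, then add a small feedback term.
# K_F, K_P, and the speed**2 scaling are illustrative assumptions.

K_F = 0.00006   # feed-forward gain (assumed value)
K_P = 0.05      # proportional gain on the remaining error (assumed value)


def steer_torque(desired_deg: float, actual_deg: float, speed_ms: float) -> float:
    # Predicted steady-state torque for this angle at this speed.
    feedforward = K_F * desired_deg * speed_ms ** 2
    # Small correction for whatever the prediction got wrong.
    feedback = K_P * (desired_deg - actual_deg)
    return feedforward + feedback
```

The claimed benefit is precision: when the prediction is good, the feedback term stays small, so the controller isn’t constantly chasing its own error.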
This seems like a good solution that would provide more precise steering over the long run, but what I found most interesting is the cycle of public testing, feedback, and re-testing after the code is modified. Comments such as “almost hit a motorcyclist” and “the steering rapidly fell apart” concern me.
At the end of the day, we are crazy feckers installing DIY level-2 autonomy in a way that can only be described as reverse engineering. Yes, the technology is exciting; yes, it works under specific use cases; but I am holding my breath, waiting for the first real accident caused by people being so willing to test unproven code in the real world, on real roads, with pedestrians and other vehicles.
I think that, apart from basic safety considerations, driving autonomy needs to be consistent and predictable. Under no circumstance should driving dynamics change over time, slowly or suddenly, or it’s really going to catch people with their pants down.
OP’s latest release has a little something called “Auto Tuning”, which changes underlying vehicle dynamics on the fly. This makes sense in a way: much of the fine-tuning can be optimized, even between identical vehicle models, since loaded weight, tire pressures, alignment, etc. can differ. But I also think it is a liability, especially when developers build further self-optimizing code on top of a changing base.
The 75 MPH curve that John makes every day on his commute while sucking down some Dunkin’ Donuts could very well turn deadly if his vehicle slips out of his lane and into a neighboring semi.
A much better model, in my opinion, would be to auto-tune locally but send the values back to Comma for review, then optimize static parameters between releases. At the very least, people are more likely to take extra care when they know a new update has been released than when changes occur live while they are driving. This could also help developers write safer code, as there is less to account for.
Keep an eye on the road, and a hand on the wheel partner.