In my earlier post https://muckypaws.com/2021/03/05/theres-a-starman/ I waxed lyrical about the new Robot Delivery service operating in my area, and I still stand by that.
As with all new technology, there's a limit to the amount of testing that can be achieved in a lab or in user acceptance testing before unleashing your creation on the unpredictable operational world, where exception conditions, bad or poorly formatted data, and use cases you didn't predict in early testing cycles inevitably occur. Anyone working in IT will know how dirty operational data can be.
This is especially true of autonomous vehicles as they start trialling in the real world, and it's expected there will be early issues, and perhaps some robot collateral, during this learn-and-trial phase of Operational Acceptance Testing.
I made my second order for robot delivery and tracked the little fella's progress as he trundled around the streets, crossing roads, until I received a message asking me to greet him at the door as he was a minute away.
I stepped outside and could see the little fella on a pedestrian island, waiting patiently for the traffic and pedestrians to clear before making the final journey across the road. It was unusually busy given lockdown, and just as a car was approaching, the little fella jumped out in front of the vehicle (maybe he was bored of delivering?), causing the driver to perform an emergency stop.
The driver looked panicked and the little robot acted confused, before trundling back to the island and awaiting further instruction.
Eventually he made it to my door, where I collected my shopping and sent him back on his way with a swipe.
I reported the incident to Starship, giving as much detail as possible, and received the usual stock answer: "we're looking into it". Normally what this means is that some developers will analyse the data, perhaps work out a patch, and roll the fix out, with the end user none the wiser.
Imagine my surprise when Starship actually replied and updated me on what the issue was. The abridged version: the robot had waited too long on the pedestrian island and signalled back to base, asking for human help to navigate this final hurdle. It was in fact human error that caused the emergency stop. Maybe complacency, trigger-happiness, or simply not paying full attention to the scenario was the cause (that I'll never know, and don't need to know).
Thank you so much for reporting this!
We have finished investigating the robot's behaviour.
The first part of the crossing went quite smoothly. However, the second part was more difficult.
Our robots are quite good at detecting cars. Due to high traffic the robot chose not to cross the road autonomously, but asked for a human operator help.
When the operator gave the crossing command the car was already approaching and slowing down.
We understand that this was a dangerous situation and have reviewed the case with the operator in question.
The incident has also been forwarded to the robot developers to make sure this would not happen again.
Thank you for making the world a safer place!
What impressed me was the great level of customer service after I reported the issue: they actually took the time to follow up post-investigation. This gives me a huge amount of confidence in the due diligence, effort and care this company is investing in getting this technology right.
At the end of the day, in this scenario it was a human, not the autonomous technology, that caused the failure. That highlights both how far autonomous technology has advanced and the rigour of the safety standards being met, and it could well make human judgement redundant in future.
Remember folks, Pray to our Robot Overlords, for one day they’ll take over the earth…