I'll try to address your points without being too wordy (can't make any guarantees)...
Quote:
Originally Posted by Nouvellecosse
I think there are things to work out but not as many as naysayers make it seem.
|
Just to make it clear, I don't consider myself a naysayer. I was simply trying to convey that the technology isn't ready yet for mass consumption. Given what I've seen of the current level of technology (all these new "driver assist" features on cars are basically testing autonomous technology in small bites), there is still a way to go.
Quote:
When it comes to new things, people tend to set higher standards of perfection in order to be considered feasible, when in reality they just have to reach the point of being measurably better than the status quo in the net aggregate. That means there can still be problems and drawbacks that we accept if we accept problems of equal or greater magnitude currently.
|
I think you're ignoring the human element here. This is just my thinking, but I believe it's easier for people to understand when somebody makes a mistake while driving and causes a collision. After all, "we're only human," and most people have probably made similar mistakes while driving but got away with them. Massive errors in judgement (driving while intoxicated, causing an accident through excessive speed, etc.) are likewise judged harshly, and people are penalized for those errors. We as a society feel that they are being punished for their deeds and tend to forgive and forget over the long run.
However, I think "we" hold a different standard when a "computer" makes a mistake that results in injury or death. I believe such cases would be judged much more harshly and, especially in the litigious USA, could result in potentially crippling lawsuits against the company(ies) responsible for the vehicle/software/system.
So while, yes, statistically the autonomous systems could be better than humans (especially regarding the errors of judgement noted above), I think they will be held to a higher standard. But I'm an old(er) guy - my understanding is that the generation coming up is typically much more accepting of "tech" playing an integral part in their personal lives, and may be more accepting of injuries or deaths resulting from 'tech failure' if the statistics say it's better overall.
Quote:
For example, children and animals running out into the road unexpectedly are problems faced by human drivers already and probably not handled any better.
|
I think you misread the intention of mentioning children, animals, and pedestrians causing cars to stop. It was in response to your previous statement:
"Ironically if all human drivers were to be off the road by tomorrow, automated operation would suddenly become far more feasible. If all the vehicles on the road were able to communicate as part of some master network it would solve so many problems. You wouldn't need traffic lights or stop signs or minimum following distances since all the traffic would be controlled by the same system."
Quote:
And cyclists not feeling safe and wanting infrastructure to separate them from traffic is already an issue. As a cyclist myself, having all the cars behave in a predictable, neutral manner (not excessively polite or aggressive), not pass you only to turn right and cut you off, or not be paying attention and not see you sounds like an improvement. Yes there would be times when the automated cars wouldn't see you but they would be less prone to distraction, emotion, or fatigue than drivers currently.
|
A couple of points:
- Having all cars act in a predictable manner could also lull human cyclists into a false sense of security, right up until something goes wrong. That said, statistically I'm sure it would be better, though we won't have any real-world data on that until the system is actually in use. Any problems encountered in early versions would be addressed in later versions, and problems severe enough would result in recalls to retrofit older/existing models with the newer hardware/software.
- I think some sophistication will have to be built into the software/hardware so it can recognize and predict the actions of cyclists, to prevent the right-turn-in-front-of-them scenario. Presumably that will be considered - if not at first, then at least (as industry tends to do) after a number of fatalities occur.
- The 'not seeing you' scenario was specifically referring to the Ottawa situation: if there were a camera/radar malfunction, the following car might see the car in front of it but not the cyclist. Again, statistics, and eventual countermeasures. The product will be improved as problems arise.
Quote:
I do agree that there are legislative issues that need to catch up but that's the case with all new technologies. On one hand corporations might resist liability more than human drivers, but there would also be more reliable camera and sensor data on board making determining the cause of accidents easier. And I don't see a reason why there couldn't simply be no-fault insurance covering everything. There are some jurisdictions that are already "no fault" as is. And pedestrians would soon learn that automated cars are just "better" drivers rather than infallible, so as soon as someone was killed taking careless risks or prosecuted with the help of the onboard cameras, precedent would be set.
|
I agree. If there is enough political will to bring this technology into the mainstream, then it will happen. Legislation will be created to make it workable.
Insurance doesn't bring back loved ones in the event of a fatality, but society tends to view a monetary payment as suitable compensation.
Oh well, so much for not being wordy...