"Nothing new arrives fully formed."
I agree, but you can't go around killing people on the assumption that something needs further development.
I have to say, I think the people who are developing artificial intelligence are trying to take it too far, too fast.
Their bosses know that being first to market will be a massive advantage, and sometimes that leads to caution going out of the window.
From this example, it's clearly not fail-safe, and it should be.
Nothing ever is, and how many are killed by human-operated vehicles?
Unfortunately, AI cannot account for humans. Humans do stupidly unpredictable things, and there are cues we can pick up on that machines won't.
Case in point: the other day in traffic, on a three-lane road, there was a driver (and I use that term loosely) in a BMW X5 weaving in and out of traffic.
I saw him in my mirror nearly collide with the car behind me as he switched lanes without indicating. He weaved again, floored it, then swerved in front of me. Thankfully I had seen the shenanigans behind me and was going below the speed limit. When he pulled in front of me he almost tore off the front of my vehicle; had I not hit the brake, and had the person behind not kept a reasonable distance, there would have been an accident.
He then proceeded to cut off a lorry; thankfully that wasn't going fast either, as he went side to side in his lane as if he couldn't make up his mind where he was going.
An autonomous vehicle simply can't spot this and predict impending stupidity.
While I'm not sure of the events that caused this accident, I'm sure more details will surface. So far, all these accidents have been caused by human error - like the lorry driver who backed into one in Vegas, expecting that if he got too close the motorist behind would blow their horn.
What was the 'vehicle operator' doing when this happened?
There was a vehicle operator in the car, so on the face of it he or she was responsible. But that's an opinion based on an assumption rather than fact, and it would be a good idea not to make the kind of assertive allegation that I have just deleted until a lot more information is available.
Self-driving cars are a fact, the technology is not going to go away. Like all new technologies, it will be refined over time and there's a legitimate argument which says that the refinement should not take place on public roads.
The other side of the argument says that it is only possible to improve the technology by operating self-driving cars in a real-life environment, and I can see the logic of it.
When rail and air travel were in their early days, hundreds of people were killed. Lots of people died when early cars first took to the roads, and many thousands of people are still being killed in road accidents - caused not by cars, but by the people who drive them. Self-driving cars are, in theory, much safer, but they depend entirely on the technologies which see what's coming. The interaction between the vehicle's sensors and its steering and braking is critical, and that's where there is obviously a lot of work still to be done.
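To make that sensor-to-actuator interaction concrete, here is a deliberately toy sketch of the basic sense-decide-act shape (real autonomy stacks involve sensor fusion, prediction and planning; the threshold values and function name below are illustrative assumptions, not any real system's logic):

```python
# Illustrative sense-decide-act braking rule based on time-to-collision.
# All thresholds are made up for the example; real systems are far richer.

def decide_brake(distance_m: float, speed_mps: float) -> float:
    """Return a brake force in [0, 1] from a simple time-to-collision check."""
    if speed_mps <= 0:
        return 0.0  # not closing on the obstacle, no braking needed
    time_to_collision = distance_m / speed_mps
    if time_to_collision < 1.0:   # under a second away: brake hard
        return 1.0
    if time_to_collision < 3.0:   # closing in: brake proportionally
        return (3.0 - time_to_collision) / 2.0
    return 0.0                    # plenty of margin: coast

assert decide_brake(10.0, 20.0) == 1.0   # 0.5 s to impact -> full brake
assert decide_brake(100.0, 20.0) == 0.0  # 5 s away -> no braking
```

Even in this toy form you can see where the hard problems live: the decision is only as good as the distance and speed estimates the sensors feed it.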
One thing that bothers me is the claim that driverless cars can be hacked; this has been demonstrated a couple of times. What will happen when they try out driverless trucks running in a convoy? Hack one and you hack them all, which could cause a major pile-up on a motorway.
Yes, it's an obvious worry, and there's no easy answer. The people who handle software security for these vehicles are going to have to come up with some serious security measures, or forget about wireless technologies altogether.
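One building block such measures would likely rest on is message authentication, so that a command injected into the convoy's radio link from outside is simply rejected. Here is a minimal sketch using an HMAC tag; the shared static key and the function names are assumptions for illustration - real vehicle-to-vehicle security (e.g. the IEEE 1609.2 family) uses per-vehicle certificates and asymmetric signatures, not one shared secret:

```python
import hashlib
import hmac
import json

# Hypothetical shared convoy key for the sketch; a real deployment would
# use per-vehicle certificates and rotating session keys instead.
CONVOY_KEY = b"example-session-key"

def sign_command(command: dict) -> dict:
    """Attach an HMAC-SHA256 tag so receivers can verify the sender."""
    payload = json.dumps(command, sort_keys=True).encode()
    tag = hmac.new(CONVOY_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": command, "tag": tag}

def verify_command(message: dict) -> bool:
    """Reject any message whose tag doesn't match - e.g. an injected one."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(CONVOY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

genuine = sign_command({"action": "brake", "force": 0.4})
assert verify_command(genuine)

forged = {"payload": {"action": "brake", "force": 1.0}, "tag": "bogus"}
assert not verify_command(forged)
```

Authentication alone doesn't solve replayed messages or a compromised lead truck, which is why the alternative mentioned above - dropping wireless coordination entirely - keeps coming up.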
you can't go around killing people on the assumption that something needs further development.
That's what happened when the first pedestrian was killed by a horseless carriage in 1906.
Wait twenty years and compare self-driving cars with human-driven cars in terms of inattention, drink-driving, phone use and so on.