@CurlOfGradient

Any argument against self-driving cars that brings up the trolley problem is also an argument against humans driving cars.

Victor Cromwell
@Honk_City
Jan 15

Humans can be held accountable for their choices behind the wheel; robots cannot.

|
||
|
|
||
|
Corn Woman 🌽
@WomanCorn
|
15. sij |
|
"Held accountable" means punished.
Would you be okay with AI drivers if the AI could suffer, and therefore be punished?
Palomar
@Palomar_qfwfq
Jan 16

I think the issue is whether you're programming/training the 'AI' (it's not even what I think of as a real AI with any holistic aspect) to make an 'informed' choice in that situation. That's not applicable today, given the kludgy state of how the technology interprets the environment...

Palomar
@Palomar_qfwfq
Jan 16

There are still reprehensible programming choices, like deciding not to classify anything in the road as human unless a crosswalk is detected (to cut down on false positives); that, plus the inattention of the safety minder, was how the woman walking a bicycle in AZ was hit.

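[A minimal sketch of the kind of gating heuristic being described here, purely hypothetical: the class names, fields, and rule are invented for illustration and are not taken from any real perception stack. It shows how suppressing the 'pedestrian' label outside mapped crosswalks trades false positives for exactly this failure mode.]

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str            # raw classifier output, e.g. "pedestrian", "unknown"
    confidence: float     # classifier confidence in [0, 1]
    near_crosswalk: bool  # is the detection inside a mapped crosswalk zone?

def gated_label(det: Detection) -> str:
    """Hypothetical gating rule: only accept 'pedestrian' near a crosswalk.

    This cuts false positives on open road, but a real person pushing a
    bicycle mid-block gets demoted to 'unknown', and a planner may not
    brake for an 'unknown' object the way it would for a pedestrian.
    """
    if det.label == "pedestrian" and not det.near_crosswalk:
        return "unknown"  # the dangerous demotion described in the tweet
    return det.label

# A mid-block pedestrian walking a bicycle: demoted despite high confidence.
print(gated_label(Detection("pedestrian", 0.92, near_crosswalk=False)))  # -> "unknown"
```
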
young thug esoterica
@dumbobutg
Jan 15

I think most arguments I've heard are about who assumes liability: 'driver' or manufacturer? Human-caused traffic accidents have a fairly obvious answer. If self-driven vehicles must make such decisions, the moral calculus is the same as for people, obviously—

young thug esoterica
@dumbobutg
Jan 15

but then people usually have accidents because they're shit drivers and react slower than we're told these vehicles will (eventually). The moral question isn't very potent unless there is in fact a decision to be made.