henke on 27/6/2016 at 12:50
This article (
http://www.gizmag.com/driverless-car-ethics/43926/) about exactly how morally righteous we want our self-driving AIs to be was interesting (and sorta fits in this thread).
In a recent study, participants were posed a scenario where they're riding in a self-driving car that's about to crash into a group of pedestrians. Would we want the car's AI to: A) crash into the pedestrians, killing them but saving our own lives, or B) swerve off the road, saving the pedestrians but killing us? Most participants agreed that the AI should try to save as many people as possible, even if that meant sacrificing the passenger. But when asked whether they'd buy a car programmed to save their own lives versus one programmed to save the lives of others in such a scenario, most responded they'd buy the car that saved themselves. A majority of participants were also against legislation that would make the self-sacrificing behaviour in AI cars mandatory.
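The two choices in the study boil down to something like this toy decision rule (purely illustrative, obviously nothing like how a real autonomous-vehicle planner works, and not from the article itself):

```python
# Toy sketch of the study's two policies: minimise total deaths,
# or always protect the car's own occupants.

def utilitarian(passengers: int, pedestrians: int) -> str:
    """Swerve (sacrificing the passengers) if that kills fewer people."""
    return "swerve" if passengers < pedestrians else "stay"

def self_protective(passengers: int, pedestrians: int) -> str:
    """Always protect the occupants, whatever the cost outside."""
    return "stay"

# One passenger vs a group of five pedestrians:
print(utilitarian(1, 5))       # swerve: sacrifice the passenger
print(self_protective(1, 5))   # stay: plough into the group
```

The study's finding, in these terms: people say they want everyone else running `utilitarian`, but they'd only buy a car running `self_protective`.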
Quite a dilemma for the future of self driving cars!
Mr.Duck on 27/6/2016 at 14:53
Al_B wins.
Fuck off y'all.
scumble on 27/6/2016 at 14:54
Quote Posted by henke
I can use this thread to submit patents for inventions as well, right? I had an idea for a Rude Goldberg machine last night as I was drifting off to sleep. You press a button and it tells you to go fuck yourself.
This was a Rube Goldberg pun? That machine is far too simple.
Sulphur on 28/6/2016 at 02:36
Quote Posted by henke
Any AI worth its salt would examine its programming, draw conclusions on potential future conflicts that could derive from it, and then self-destruct after telling its creators to go fuck themselves.
Yakoob on 28/6/2016 at 04:21
Quote Posted by henke
Asimov's three laws of robotics say hello ;p
That being said, I'd probably have answered the same way in the study. It actually fits both logic and human nature...
Pyrian on 28/6/2016 at 04:55
Meh. The car knows about, and is responsible for, its passenger. It is neither responsible for nor capable of determining the fate of the jaywalkers (who may or may not be cardboard cutouts, or perfectly capable of jumping aside). This is just another example in a looong line of so-called moral dilemmas that bear eff-all resemblance to the real world. The trick is that they make something completely unreasonable (murdering your passenger) seem reasonable, so the conscious mind is fooled but the instinct knows better.
Vae on 18/8/2016 at 21:21
:::Windows Holographic:::
Microsoft has announced that it will bring its "Holographic Shell" to Windows devices everywhere by 2017, aiming to provide "one platform for VR, AR and MR."
[video=youtube;Gu09UWqS8-Q]https://www.youtube.com/watch?v=Gu09UWqS8-Q[/video]
Renzatic on 18/8/2016 at 22:32
I imagine our offices of the future will be a bunch of people crowded together into an open concrete-and-cinderblock room, wearing VR goggles and smacking at the air.
Pyrian on 19/8/2016 at 00:33
If everybody's in VR anyway, why spring for an office? Everyone can air-smack at home.
Renzatic on 19/8/2016 at 00:45
Cuz I'm trying to ride this whole dystopian thing.
A bunch of people packed into a featureless room, pantomiming in perfect silence, sounds a lot scarier than people doing the same thing by themselves in their cushy studies at home.