Robots and moral obligations

Manuscript version

Open access

Rights: All rights reserved

Abstract

From the article: Using Roger Crisp’s arguments for well-being as the ultimate source of moral reasoning, this paper argues that there are no ultimate, non-derivative reasons to program robots with moral concepts such as moral obligation, morally wrong, or morally right.
Although these moral concepts should not be used to program robots, humans should not abandon them, since there are still reasons to keep using them: to assess an agent, to take a stand, or to motivate and reinforce behaviour.
Because robots are completely rational agents, they do not need these additional motivations; a concept of what promotes well-being suffices. How a robot knows which action promotes well-being to the greatest degree is still up for debate, but a combination of top-down and bottom-up approaches seems to be the best way.

The final publication is available at IOS Press through
http://dx.doi.org/10.3233/978-1-61499-708-5-184

Organisation: Hogeschool Utrecht
Department: Kenniscentrum Leren en Innoveren
Research group: Betekenisvol Digitaal Innoveren
Published in: Frontiers in Artificial Intelligence and Applications, 2016, Vol. 290: What Social Robots Can and Should Do, pp. 184-189
Year: 2016
Type: Article
Language: English
