David D'Souza
Modern . Do you discipline someone if they make lewd comments to your chatbot? One for and
Rita Risser Chai, Esq. Aug 13
If they are overheard by humans, yes
David D'Souza Aug 13
It would be written.
Adam Meek Aug 13
I may have done this... I was frustrated that it didn't seem to work, so I asked if it was alive and sent it a gif of Frankenstein. I was gently told that the chatbot posts are shared with most of the company, but only a select few know who posted them. A week later the chatbot died.
Barbara Thompson Aug 13
To be honest, whilst many will laugh at this (I did initially), there is some real thinking required here. Not just HR but legal implications too, as robots more broadly fulfil roles alongside humans in the workplace. For example, how would corporate manslaughter work?
David D'Souza Aug 13
Different type of frustration here...
Rita Risser Chai, Esq. Aug 13
Do humans see it? Same answer.
Rita Risser Chai, Esq. Aug 13
To be fair, I would give the person a verbal heads up. But if repeated after that, yes they’d be warned. And depending on how lewd, I might start watching them more closely.
Greg Hobbs Aug 13
Are you asking from an ethics standpoint, a professionalism standpoint or just human behaviour?
Kate Devlin Aug 13
Replying to @dds180 @RobMcCargow
This is one for the philosophers.
David D'Souza Aug 13
Yes
Dr Beth Singler Aug 13
Manners maketh... human. True, it won't upset the chatbot, but it is illuminating of the person and their character. I was once meant to talk about this on the Today programme but they asked me about AGI instead -.-
Dr Beth Singler Aug 13
*I am but a mere anthropologist, not a philosopher (even with a chapter in 'Blade Runner 2049 and Philosophy' ;)
David D'Souza Aug 13
Near miss
Greg Hobbs Aug 13
Ethically, I think it depends on the extent to which the machine learning would impact other humans. Professionally it could demonstrate a deeper disregard for ways of working and, behaviourally, it could lead to questions of ‘what else’? From a human perspective, I’m not surprised.
Lisa Meinecke Aug 13
Isn't this a variation of "you'll find out who a person is by the way they interact with their servants" -- I think the AI/service/servitude connection is really obvious here...
David D'Souza Aug 13
And just to flag up that this isn't a thought experiment. I was talking to someone about their implementation of a chatbot earlier...
#HR Pro | The Workplace Evolutionist Aug 13
Yikes. Chatbot graveyards popping up everywhere huh?
#HR Pro | The Workplace Evolutionist Aug 13
We will need a “policy” for that. Get it? “Police-y” Sorry. Long day.
#HR Pro | The Workplace Evolutionist Aug 13
No. I wouldn’t. We would discipline though for populating our database with cuss words. Maybe. Good question. These are rules yet to be written!