Rachel Coldicutt
My take on Google’s AI Principles. Why they - and every tech ethics statement - need to state who they are for and who they will harm.
This is a very short post, because the alternative would be to write a very, very long one and it’s Friday night — and well, I’m not Peter…
David Reed Jun 9
Replying to @rachelcoldicutt
This is so good! I wavered between thinking Google et al’s “ethics” work is highly naive and thinking it’s highly disingenuous. If it doesn’t force you into making very costly decisions, it’s not ethics, but a risk register.
Gary Todd 💡 Jun 9
Replying to @rachelcoldicutt
Excellent article. You've explained in a single blog why our strapline contains the words 'better for all'. Very encouraging to read this explanation and understanding.
Anne Currie Jun 9
Replying to @rachelcoldicutt
For algorithms, the errors (or losers, in human terms) are traditionally edge cases, where the parameters vary most significantly from the mainline. Having said that, we also get timing-related issues, which are pure bad luck (i.e. anyone could be affected).
Anne Currie Jun 9
Replying to @rachelcoldicutt
The more scale we achieve, the more of both we see. All software has bugs. That's why having algorithms make important, irrevocable decisions (like life or death) is a bad thing
Cassie Robinson. Jun 9
Replying to @rachelcoldicutt
Ummm .. do we need to do that with our Responsible Tech principles then? 🤔
Rachel Coldicutt Jun 9
Replying to @CassieRobinson
Ours are out there to prompt people to do just that: consider the social impact, consequences, and value flows of their products and services. We can’t dictate who others choose to benefit, but we can ask them to be clearer about their intentions.