WN#23: The Challenges with Ethics in AI and Misinformation Labeling
This week we're exploring AI ethics, misinformation labeling, management tips, Amazon versus Shopify, personality tests, and more. I hope you like it. 🙏
Hi all! Here I am with issue number 23. This issue has been complicated: it started with writer’s block over a difficult topic that took me more time than usual, and ended with my decision to postpone the newsletter by a week.
I started last week writing about the resignation of Timnit Gebru from Google Brain, but it turned into an opinion post with no room for anything else. It had some interesting points about ethics and management, but it went against what I want this newsletter to be: I want to share exciting knowledge bites, not just one single topic.
So I decided to move that piece to my blog. Today I am still including material on AI ethics, but also on misinformation labeling, management tips, Amazon and Shopify, personality tests, and many other exciting tweets. I hope you find something interesting in this week’s selection.
As I always say to end this introduction: please do not hesitate to add your comments or share your feedback. One of my goals is to learn from you. And if you like the newsletter, please share it. The more of us there are, the more we will share, and the more fun we will have.
Hope to see you next week. Thanks for reading it, and stay safe!
🚨 Ethical Issues in Artificial Intelligence
Last week I spent a lot of time reading and writing about the case of Timnit Gebru, former co-lead of the Ethical AI team at Google Brain, which ended in my decision not to send the newsletter and to publish the piece as a blog post instead.
But I didn’t want to lose the opportunity to share with you a brief view of the ethical challenges in AI and why ethics matters here: AI has the potential to change the world, but the open questions are how, and in which direction.
The ethics of AI lies in the ethical quality of its predictions, of the end outcomes drawn from them, and of the impact those outcomes have on humans.
So the challenge of AI ethics is to avoid any harm caused by these outcomes. The damage is usually unintended, but when there is a lack of awareness of these issues, the potential for harm is very real.
A “harm” is caused when a prediction or end outcome negatively impacts an individual’s ability to establish their rightful personhood (harms of representation), leading to or independently impacting their ability to access resources (harms of allocation).
Even UNESCO is asking for policies and regulatory frameworks to ensure that these emerging technologies benefit humanity as a whole. Biased algorithmic systems can lead to unfair outcomes, discrimination, and injustice; there is a high risk of encoding human biases and blind spots.
There’s a fierce debate about the role of ethics in AI research. Some researchers defend their contribution to science as an argument against being censored on ethical grounds.
Returning to Google's case, the content of Timnit Gebru's paper gives some context and helps explain why Google rejected it, a rejection that ended with the company firing her.
MIT Technology Review published an analysis of the content: We read the paper that forced Timnit Gebru out of Google. Here's what it says. Azeem Azhar did a great job summarizing MIT's review in his LinkedIn article The Essential Ethical Quandary of Industry, where he raised the issue of Google's biased ethical work.
Google says that the work its Ethical AI team does is important, but that work can’t be completed by an in-house team. Self-regulation in this area will not work: it didn’t work with oil companies and climate change, and it won’t work with tech firms.
On top of that, there's a conflict of interest: Google has no incentive to research anything that could cause it problems, so it cannot allow the team to work freely and independently. Does Google want to keep an ethics team without ethics?
Andrew Ng also raised this open question about the relationship between research and companies, and the need for balanced rules that set similar expectations for both.
There’s a lot of work to do to define ethics in AI, to clarify its relationship with the companies researching and developing AI, and to conduct a profound review inside academic research.
If you’re interested in knowing more about AI ethics, you could visit The Hitchhiker’s Guide to AI Ethics.
🏷️ The Backfire Effect and Misinformation Labeling
Have you ever heard of the backfire effect? It’s the idea that “when a claim aligns with someone’s ideological beliefs, telling them that it’s wrong will actually make them believe it even more strongly”.
The Misconception: When your beliefs are challenged with facts, you alter your opinions and incorporate the new information into your thinking.
The Truth: When your deepest convictions are challenged by contradictory evidence, your beliefs get stronger.
That’s what Casey Newton shared in an article about recent academic research on why labeling misinformation is not working as expected: Encounters with Visual Misinformation and Labels Across Platforms.
The study found that half of the participants were glad to see labels identifying misinformation, but the other half had a hostile attitude toward labeling.
While the research sample is small, participants’ answers and reactions showed doubt and distrust toward the credibility of the labels. So, instead of helping fight misinformation, labels could be pouring gasoline on the fire of polarization.
✨ Suggestions to Become a Better Manager
Starting from the quote “People don’t quit a job, they quit a boss,” the article A Tactical Guide to Managing Up: 30 Tips from the Smartest People We Know offers a list of thirty tips to help you become the best manager possible for your reports.
In the post, you’ll find recommendations from startup and tech leaders and authors, divided into the following seven categories. Follow the category link that suits you the most:
⚔️ Does Amazon Have to Worry about Shopify?
Several months ago, Ben Thompson wrote on Stratechery about the Anti-Amazon Alliance, the power of platforms, and the evolution of Shopify’s business model into a platform. Shopify was doing its job very well.
As Shopify shared in its annual report on global e-commerce trends, the pandemic has changed the way we shop, some of the barriers to e-commerce seem to have been overcome, and young consumers will change the e-commerce landscape.
Regarding social commerce, Shopify made a great move by partnering with TikTok, and with Google building a shopping destination out of YouTube, the sector is making a serious leap into social and live-streaming commerce.
Shopify’s massive growth led Amazon to think about launching a rival again. Amazon shut down its Webstore solution in 2015 and once considered partnering with Shopify, but Shopify now seems to be a real threat.
And it is not only because Tobi Lütke said that he would buy Amazon in 2029, but because of the market’s size and potential, probably underrated before the pandemic, as Alexandre Dewez analyzed earlier this year.
By the way, if you want to get to know Tobi better, I recommend reading the following interview. There are lots of useful insights inside.
☑️ Personality Type Tests
I recently saw the article A practical guide to working remotely with all 16 personality types, about the Myers-Briggs test and how its personality types could translate to remote work. The article appeared first on the Atlassian blog.
Truth be told, this test is not science, or as some psychologists say, it is junk science; read Adam Grant’s Psychology Today article about the MBTI. In summary, we can probably all agree that four letters don’t do justice to anyone’s identity.
I had already written about these kinds of personality tests and categorizations on my blog, trying to learn a little about them. And I will stand up for these tests if they open the door to thinking about other people’s personalities, anticipating our impact on each of them, and considering our blind spots. I believe this is their real use, not just pigeonholing people into four letters that can change over time.
Anyway, as Adam also notes, other personality tests, though far from perfect, were built using the scientific method. This is the case with the Big Five: its traits have high reliability and considerable power in predicting job performance and team effectiveness.
You can find the test at this link, and I encourage you to take it, think about the result, and, why not, share it here!
🐦 Some Tweets that Deserve Attention
Stay humble. Challenge everything you believe.
I love this risky but fantastic approach from Jean-Michel Lemieux, Shopify CTO, to receive first-hand feedback from customers.
And without leaving Jean-Michel Lemieux, here is a Twitter thread that shares his 25/50/25 leadership model: 25% of my time, I’m your boss. 50% of my time, I’m your peer. And then 25% of the time, I work for you.
I couldn’t forget to include one of the several threads from Lenny Rachitsky, this time about team velocity. Check it out.
What about Dan Rose’s experiences? It is impossible to wrap up this Twitter thread in a single phrase, so please read it. And if you like it, follow him: he started sharing stories this year, and they are gold.
Trevor categorized the tweets that gained the most traction, and some of the tweets he includes deserve a read themselves.
Here’s a short piece of advice on public speaking from Kelsey Hightower.
And as an amateur newsletter writer, I am always interested in gaining subscribers, and Morning Brew knows all about that.
Thanks for reading. I hope you liked it!
Please do not hesitate to add any comments. I am open to suggestions, so if you want to leave a comment or contact me, I encourage you to do it.
Also, if you liked the content and think others might too, please share!