
THREAD: THE TWITTER FILES PART TWO. TWITTER’S SECRET BLACKLISTS.
1. A new #TwitterFiles investigation reveals that teams of Twitter employees build blacklists, prevent disfavored tweets from trending, and actively limit the visibility of entire accounts or even trending topics—all in secret, without informing users.
2. Twitter once had a mission “to give everyone the power to create and share ideas and information instantly, without barriers.” Along the way, barriers nevertheless were erected.
3. Take, for example, Stanford’s Dr. Jay Bhattacharya (@DrJBhattacharya) who argued that Covid lockdowns would harm children. Twitter secretly placed him on a “Trends Blacklist,” which prevented his tweets from trending.
4. Or consider the popular right-wing talk show host, Dan Bongino (@dbongino), who at one point was slapped with a “Search Blacklist.”
5. Twitter set the account of conservative activist Charlie Kirk (@charliekirk11) to “Do Not Amplify.”
6. Twitter denied that it does such things. In 2018, Twitter’s Vijaya Gadde (then Head of Legal, Policy, and Trust) and Kayvon Beykpour (Head of Product) said: “We do not shadow ban.” They added: “And we certainly don’t shadow ban based on political viewpoints or ideology.”
7. What many people call “shadow banning,” Twitter executives and employees call “Visibility Filtering” or “VF.” Multiple high-level sources confirmed its meaning.
8. “Think about visibility filtering as being a way for us to suppress what people see to different levels. It’s a very powerful tool,” one senior Twitter employee told us.

Dec 9, 2022 · 12:43 AM UTC

9. “VF” refers to Twitter’s control over user visibility. It used VF to block searches of individual users; to limit the scope of a particular tweet’s discoverability; to block select users’ posts from ever appearing on the “trending” page; and to exclude them from hashtag searches.
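The thread describes these visibility controls only at a high level; no internal code is published. As a purely hypothetical sketch of how per-account labels like “Trends Blacklist,” “Search Blacklist,” and “Do Not Amplify” could gate different surfaces (all names and structures below are assumptions for illustration, not Twitter’s actual implementation):

```python
from dataclasses import dataclass, field

# Hypothetical visibility labels named after the flags the thread describes
# ("Trends Blacklist", "Search Blacklist", "Do Not Amplify"). Twitter's real
# system is not public; this is illustration only.
TRENDS_BLACKLIST = "trends_blacklist"
SEARCH_BLACKLIST = "search_blacklist"
DO_NOT_AMPLIFY = "do_not_amplify"

@dataclass
class Account:
    handle: str
    labels: set[str] = field(default_factory=set)

def eligible_for_trending(account: Account) -> bool:
    # A "Trends Blacklist" label keeps the account's tweets off the trending page.
    return TRENDS_BLACKLIST not in account.labels

def eligible_for_search(account: Account) -> bool:
    # A "Search Blacklist" label hides the account's tweets from search and
    # hashtag results.
    return SEARCH_BLACKLIST not in account.labels

def ranking_weight(account: Account) -> float:
    # "Do Not Amplify" demotes content in ranked surfaces without removing it.
    return 0.0 if DO_NOT_AMPLIFY in account.labels else 1.0

if __name__ == "__main__":
    acct = Account("example_user", labels={TRENDS_BLACKLIST, DO_NOT_AMPLIFY})
    print(eligible_for_trending(acct))  # False
    print(eligible_for_search(acct))    # True
    print(ranking_weight(acct))         # 0.0
```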
10. All without users’ knowledge.
11. “We control visibility quite a bit. And we control the amplification of your content quite a bit. And normal people do not know how much we do,” one Twitter engineer told us. Two additional Twitter employees confirmed.
12. The group that decided whether to limit the reach of certain users was the Strategic Response Team - Global Escalation Team, or SRT-GET. It often handled up to 200 “cases” a day.
13. But there existed a level beyond official ticketing, beyond the rank-and-file moderators following the company’s policy on paper. That is the “Site Integrity Policy, Policy Escalation Support,” known as “SIP-PES.”
14. This secret group included the Head of Legal, Policy, and Trust (Vijaya Gadde), the Global Head of Trust & Safety (Yoel Roth), subsequent CEOs Jack Dorsey and Parag Agrawal, and others.
15. This is where the biggest, most politically sensitive decisions got made. “Think high follower account, controversial,” another Twitter employee told us. For these “there would be no ticket or anything.”
16. One of the accounts that rose to this level of scrutiny was @libsoftiktok—an account that was on the “Trends Blacklist” and was designated as “Do Not Take Action on User Without Consulting With SIP-PES.”
17. The account—which Chaya Raichik began in November 2020 and now boasts over 1.4 million followers—was subjected to six suspensions in 2022 alone, Raichik says. Each time, Raichik was blocked from posting for as long as a week.
18. Twitter repeatedly informed Raichik that she had been suspended for violating Twitter’s policy against “hateful conduct.”
19. But in an internal SIP-PES memo from October 2022, after her seventh suspension, the committee acknowledged that “LTT has not directly engaged in behavior violative of the Hateful Conduct policy.” See here:
20. The committee justified her suspensions internally by claiming her posts encouraged online harassment of “hospitals and medical providers” by insinuating “that gender-affirming healthcare is equivalent to child abuse or grooming.”
21. Compare this to what happened when Raichik herself was doxxed on November 21, 2022. A photo of her home with her address was posted in a tweet that has garnered more than 10,000 likes.
22. When Raichik told Twitter that her address had been disseminated, she says Twitter Support responded with this message: “We reviewed the reported content, and didn’t find it to be in violation of the Twitter rules.” No action was taken. The doxxing tweet is still up.
23. In internal Slack messages, Twitter employees spoke of using technicalities to restrict the visibility of tweets and subjects. Here’s Yoel Roth, Twitter’s then Global Head of Trust & Safety, in a direct message to a colleague in early 2021:
24. Six days later, in a direct message with an employee on the Health, Misinformation, Privacy, and Identity research team, Roth requested more research to support expanding “non-removal policy interventions like disabling engagements and deamplification/visibility filtering.”
25. Roth wrote: “The hypothesis underlying much of what we’ve implemented is that if exposure to, e.g., misinformation directly causes harm, we should use remediations that reduce exposure, and limiting the spread/virality of content is a good way to do that.”
26. He added: “We got Jack on board with implementing this for civic integrity in the near term, but we’re going to need to make a more robust case to get this into our repertoire of policy remediations – especially for other policy domains.”
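Roth’s messages describe “non-removal policy interventions” (disabling engagements and deamplification/visibility filtering) as alternatives to taking content down. A toy sketch of that distinction follows; the decision rule and thresholds are invented for illustration and are not drawn from the documents:

```python
from enum import Enum, auto

class Remediation(Enum):
    # Interventions named in the messages quoted above, plus removal for
    # contrast. Purely illustrative; not Twitter's actual policy engine.
    NONE = auto()
    DISABLE_ENGAGEMENTS = auto()  # turn off replies, retweets, likes
    DEAMPLIFY = auto()            # visibility filtering / reduced ranking
    REMOVE = auto()

def choose_remediation(violates_policy: bool, estimated_harm: float) -> Remediation:
    """Toy decision rule: when content does not violate policy outright,
    reduce exposure instead of removing it. Thresholds are made up for
    the example."""
    if violates_policy:
        return Remediation.REMOVE
    if estimated_harm >= 0.7:
        return Remediation.DISABLE_ENGAGEMENTS
    if estimated_harm >= 0.3:
        return Remediation.DEAMPLIFY
    return Remediation.NONE
```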
27. There is more to come on this story, which was reported by @abigailshrier @shellenbergermd @nelliebowles @isaacgrafstein and the team at The Free Press @thefp. Keep up with this unfolding story here and at our brand-new website: thefp.com.
28. The authors have broad and expanding access to Twitter’s files. The only condition we agreed to was that the material would first be published on Twitter.
29. We're just getting started on our reporting. Documents cannot tell the whole story here. A big thank you to everyone who has spoken to us so far. If you are a current or former Twitter employee, we'd love to hear from you. Please write to: tips@thefp.com
30. Watch @mtaibbi for the next installment.