This piece was developed in the first year of my PhD as an effort to reflect on how platforms moderate content and whether they take context into account. I look at Instagram's content moderation policy, drawing particular attention to the case of #belarecatadaedolar.

˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚ ˚

In May 2016, a serious case of sexual assault shocked the world, forcing Twitter to block users and remove a 40-second video showing a naked and semi-conscious Brazilian teenager who had allegedly been raped by 30 men. A photo of a man with exposed genitals posing next to the youngster was also deleted. Before being removed, the images received 500 likes. Brazilians quickly responded, expressing their disapproval with #EstuproNuncaMais [No More Rape] and protesting on social media in support of the cause. In this case, Twitter took the expected position. However, according to Julia Mora-Blanco (Twitter's former manager of User Safety Policy) and Jacob Hoffman-Andrews (a former Twitter engineer), prior to 2014 Twitter had no engineers dedicated to addressing harassment on the platform. Since then, Twitter has committed itself to preventing abuse[i], "publicly announcing the formation of an advisory council and introducing training sessions with law enforcement" (Buni and Chemaly, 2016).

Two more examples: i) an anonymous YouTube moderator was instructed to take down videos showing murder, beatings and drug-related violence coming from Mexico, whereas all videos related to political violence in Syria and Russia were to remain live (Buni and Chemaly, 2016); and ii) Facebook's acceptance of breastfeeding photos in mid-2014 [ii], a decision made in response to users' pressure and their persistent campaigns on social media. Nonetheless, posts of women exposing their nipples remain out of the question (men are allowed to do so), even when they carry political or social claims. What constitutes appropriate speech in the public domain seems to depend on political significance, newsworthiness [iii] (Buni and Chemaly, 2016) or cultural perspectives that are far from consensual. Who decides what is significant or newsworthy, and under which circumstances?

Catherine Buni and Soraya Chemaly [iv] (2016, p. 14) state that our view of how social media platforms moderate content remains blurred, because these platforms “do not publish details of their internal content moderation guidelines; no major platform has made such guidelines public”. Among other consequences, content moderation impacts free speech and the shaping of social norms: “what flagged content should be removed? Who decides what stays and why? What constitutes newsworthiness? Threat? Harm?” (Buni and Chemaly, 2016, p. 9).

Let me now introduce Instagram’s terms of use [v], which try to advance clear reasons to flag content, warning that users may not post “violent, nude, partially nude, discriminatory, unlawful, infringing, hateful, pornographic or sexually suggestive photos” (Instagram, 2013), though the platform may, but has “no obligation to remove, edit, block, and/or monitor Content or accounts containing” (Instagram, 2013), anything that violates the Terms of Use.

To make matters worse, there is an incoherent statement in the section “Things you should know” [vi]: on the one hand, Instagram has the power to “change, suspend, or discontinue the availability of any Instagram APIs at any time” and also reserves the right “to charge fees for future use of or access to the Instagram APIs”; on the other, it states that although the platform APIs are owned by Instagram and licensed to users worldwide, “User Content is owned by users and not by Instagram” (Instagram n.d.). To finish this overview, I turn to the section “Learn how to address abuse” [vii] of the Instagram Help Centre, where Instagram encourages users to consider the context:

When you see an upsetting post, caption or comment on Instagram, consider the larger conversation it may be connected to. Step back and determine the context of the post. Many people use Instagram in ways that are specific only to our service, which can create some confusion when something is taken out of context. Have you checked hashtags associated with the post? It’s possible that the post is part of a trend or may be referring to something that’s not obvious. Review all information related to the post or the full profile of the person who shared the post to understand the whole story. (Instagram n.d.)

Regarding Instagram’s content moderation policy, I was particularly intrigued by the backlash against articles in the Brazilian magazines Isto é and Veja. In short, the cover of Isto é [viii] on April 6, 2016 depicted a fierce, exasperated and furious president (see below) under the headline “The nerve bursts of President Dilma”. The article provoked a flood of negative reactions, particularly within feminist movements. Only a few days later, on April 18, 2016, an article signed by Juliana Linhares in Veja [ix] stressed the stereotyped, ideal model of womanhood represented by Marcela Temer – wife of Brazil’s interim president at the time, Michel Temer (see Figure 5). The headline read “Bela, Recatada e do Lar” [Beautiful, demure and of the home], and criticism of this stereotyped portrait popped up everywhere (newspapers, social media, blogs, etc.). A simple search on Google Images using #belarecatadaedolar returns loads of memes of women pretending to be naughty or immoral as a way of confronting male chauvinism and the frivolity of the article.

#belarecatadaedolar: does context matter?

The artist and dancer Isabella Maia, under the username bella_maia_ [x] on Instagram, immediately posted her counter-response: an artistic nude photo tagged, like many others, with the hashtag #belarecatadaedolar (see above). Instagram’s general policy terms [xi] say users should not “confuse, deceive, defraud, mislead, or harass anyone”; they must be transparent about their identity and must not “violate any rights of any person”. In 2015, the platform also instituted harsher rules to control harassment, nudity and porn, along with the condemnation of the exaltation of terrorism, organized crime and hate groups. However, even though Isabella Maia had not harassed anyone, defrauded anyone or incited terrorist practices, her post was removed (see above) because it contained nudity [xii].




[iii] For instance, in 2009 YouTube’s Policy Team decided to keep up a video depicting a 26-year-old woman, Neda Agha-Soltan, who had been struck by a bullet to the chest during pro-democracy demonstrations in Iran. According to Buni and Chemaly (2016, April 13), the decision was based on what constituted “ethical journalism (newsworthiness) and political significance”.

[iv] Buni and Chemaly (2016, April 4) also highlight the role of moderators in having “a powerful impact on free speech, government dissent, the shaping of social norms, user safety, and the meaning of privacy”, as well as the severe health costs moderators face (e.g. depression, anxiety, relational difficulties) from constant exposure to toxic images, or “the worst of humanity”.


[vi] In section C of the Instagram Platform Policy.




[x] This profile has since been deleted, and Isabella now uses the username isa.e.bella.


[xii] Isabella Maia’s post was also removed from her Facebook profile, and she was banned from accessing her Facebook account for 24 hours.