Annexes to COM(2017)555 - Tackling Illegal Content Online - Towards an enhanced responsibility of online platforms


agreements."

For terrorist content32, an EU Internet Referral Unit (IRU) has been established at Europol, whereby security experts assess and refer terrorist content to online platforms (while some Member States have their own national IRUs).

Online platforms should systematically enhance their cooperation with competent authorities in Member States, while Member States should ensure that courts can react effectively against illegal content online and that (cross-border) cooperation between authorities is strengthened.

Online platforms and law enforcement or other competent authorities should appoint effective points of contact in the EU, and where appropriate define effective digital interfaces to facilitate their interaction.

Platforms and law enforcement authorities are also encouraged to develop technical interfaces that allow them to cooperate more effectively across the entire content governance cycle. Cooperation with the technical community can also be beneficial in advancing towards effective and technically sound solutions to this challenge.

3.2.        Notices

3.2.1.      Trusted flaggers

The removal of illegal content online happens more quickly and reliably where online platforms put in place mechanisms to facilitate a privileged channel for those notice providers which offer particular expertise in notifying the presence of potentially illegal content on their website. These so-called "trusted flaggers" are specialised entities with specific expertise in identifying illegal content and dedicated structures for detecting and identifying such content online.

Compared to ordinary users, trusted flaggers can be expected to bring their expertise and work with high quality standards, which should result in higher quality notices and faster take-downs. Online platforms are encouraged to make use of existing networks of trusted flaggers. For instance, for terrorist content, the Europol's Internet Referral Unit has the necessary expertise to assess whether a given content constitutes terrorist and violent extremist online content, and uses this expertise to act as a trusted flagger, besides its law enforcement role. The INHOPE network of hotlines for reporting child sexual abuse material is another example of a trusted flagger; for illegal hate speech content, civil society organisations or semi-public bodies are specialised in the identification and reporting of illegal online racist and xenophobic content.

In order to ensure a high quality of notices and faster removal of illegal content, criteria based notably on respect for fundamental rights and democratic values could be agreed by the industry at EU level. This can be done through self-regulatory mechanisms or within the EU standardisation framework, under which a particular entity can be considered a trusted flagger, allowing for sufficient flexibility to take account of content-specific characteristics and the role of the trusted flagger. Other such criteria could include internal training standards, process standards and quality assurance, as well as legal safeguards as regards independence, conflicts of interest, protection of privacy and personal data, as a non-exhaustive list. These safeguards are particularly important in the limited number of cases where platforms may remove content upon notification from the trusted flagger without further verifying the legality of the content themselves. In these limited cases, trusted flaggers could also be made auditable against these criteria, and a certification scheme could attest the trusted status. In all cases, sufficient safeguards should be available to prevent abuse of the system, as outlined in section 4.3.

32 In the sense of Article 5 of Directive (EU) 2017/541 on combating terrorism.

Competent authorities should be offered the possibility to participate in the trusted-flagger reporting mechanisms, where relevant.

A reasonable balance needs to be struck between ensuring a high quality of notices coming from trusted flaggers, the scope of additional measures that companies would take in relation to trusted flaggers and the burden in ensuring these quality standards. Where there are abuses of trusted flagger mechanisms against established standards, the privilege of a trusted flagger status should be removed.

The Commission encourages the close cooperation between online platforms and trusted flaggers. Notices from trusted flaggers should be able to be fast-tracked by the platform. This cooperation should provide for mutual information exchange so as to evaluate and improve the removal process over time.

The Commission will further explore, in particular in dialogues with the relevant stakeholders, the potential of agreeing EU-wide criteria for trusted flaggers.

3.2.2.      Notices by users

In the effective fight against illegal content online, ordinary users should be empowered to signal illegal content to online platforms and have confidence that justified notices will be considered and acted upon swiftly.

Online platforms should establish an easily accessible and user-friendly mechanism that allows their users to notify content that they consider illegal and which the platforms host.

Where the content is publicly available, such reporting mechanisms should also be available to the general public, without needing to be signed in as a user. To improve the efficiency and accuracy of the assessment of potentially illegal content, such mechanisms should allow for easy notification by electronic means.

The Commission's proposal on the revision of the Audiovisual Media Services Directive aims to create an obligation on video-sharing platform providers to establish and operate mechanisms for users to report or flag audio-visual content which may impair the physical, mental or moral development of minors, as well as content containing incitement to violence or hatred.

3.2.3.      Ensuring the high quality of notices

Online platforms should put in place effective mechanisms to facilitate the submission of notices that are sufficiently precise and adequately substantiated to enable the platforms to take a swift and informed decision about the follow-up. This should facilitate the provision of notices that contain an explanation of the reasons why the notice provider considers the content illegal and a clear indication of the location of the potentially illegal content (e.g. the URL address).

Such reporting mechanisms should be visible, easily accessible, user-friendly and contextual. They should also allow for easy reporting of different content types, e.g. by selection from a list of categories of reasons for which the content is considered illegal. Where technically feasible, features such as allowing a notification to be made directly at the point where the material is first encountered, or offering the reuse of sign-in credentials, can be used.

Such a sufficiently substantiated and detailed notice enables the platform to find the potentially illegal content quickly, make a sound assessment of the illegality of the content, and act expeditiously where appropriate. The exact level of detail required by platforms to expeditiously take informed decisions can vary considerably from one type of content to another.
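
By way of illustration only, the sketch below shows what a minimal, sufficiently substantiated notice could look like as a data structure. The field names, reason categories and validation rule are hypothetical assumptions for this example and are not prescribed by this Communication.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, non-exhaustive list of reason categories a reporting form might offer.
REASON_CATEGORIES = {
    "terrorist_content",
    "child_sexual_abuse_material",
    "illegal_hate_speech",
    "ipr_infringement",
    "illegal_product_offer",
    "other",
}

@dataclass
class Notice:
    """A single notification of potentially illegal content (illustrative only)."""
    content_url: str        # clear indication of the location of the content
    reason_category: str    # category selected from a predefined list
    explanation: str        # why the notice provider considers the content illegal
    contact_email: Optional[str] = None       # voluntary; enables confirmation of receipt and follow-up
    trusted_flagger_id: Optional[str] = None  # set when the notice comes via a trusted flagger

def is_actionable(notice: Notice) -> bool:
    """Check that the notice is precise and substantiated enough for an informed decision."""
    return (
        notice.content_url.startswith(("http://", "https://"))
        and notice.reason_category in REASON_CATEGORIES
        and len(notice.explanation.strip()) > 0
    )
```

Under these assumptions, a platform could fast-track notices where trusted_flagger_id is set and send a confirmation of receipt whenever contact_email is provided, in line with the paragraphs that follow.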

Users should normally not be obliged to identify themselves when reporting what they consider illegal content, unless this information is required to determine the legality of the content (e.g., asserting ownership for intellectual property rights (IPR)). This is especially the case where their safety can be at risk or where revealing one's identity could have legal implications. Users should be encouraged to raise their notification via trusted flaggers, where these exist, whenever they wish to maintain anonymity vis-à-vis platforms.

However, notice providers should have the opportunity to voluntarily submit their contact details in a notification, in order to allow the online platform to ask for additional information or to inform the notice provider about any intended follow-up. In that case, the notice provider should receive a confirmation of receipt and a communication indicating the follow-up given to the notification.

A confirmation of receipt not only spares the notice provider from having to check whether his/her request has been followed up on, but can also serve as evidence in judicial or out-of-court proceedings in accordance with the rules applicable to such proceedings.

3.3.        Proactive measures by online platforms

3.3.1.      Proactive measures and the liability exemption

Online platforms should, in light of their central role and capabilities and their associated responsibilities, adopt effective proactive measures to detect and remove illegal content online and not only limit themselves to reacting to notices which they receive. Moreover, for certain categories of illegal content, it may not be possible to fully achieve the aim of reducing the risk of serious harm without platforms taking such proactive measures.

The Commission considers that taking such voluntary, proactive measures does not automatically lead to the online platform losing the benefit of the liability exemption provided for in Article 14 of the E-Commerce Directive.

Firstly, this liability exemption is only available to providers of ‘hosting’ services who meet the conditions set out in Article 14 of that Directive; such service providers are those whose activities consist of the storage of information at the request of third parties and which do not play an active role of such a kind as to give it knowledge of, or control over, that information.33

Recital 38 of the Commission’s proposal for a Directive on copyright in the Digital Single Market of 14 September 2016 states in this regard: "In respect of Article 14 [of the E-Commerce Directive], it is necessary to verify whether the service provider plays an active role, including by optimising the presentation of the uploaded works or subject-matter or promoting them, irrespective of the nature of means used therefore".

Specifically in respect of Article 14 of the E-Commerce Directive, in the case L'Oréal v eBay, the Court of Justice clarified that “the mere fact that [an online platform] stores offers for sale on its server, sets the terms of its service, is remunerated for that service and provides general information to its customers cannot have the effect of denying it the exemptions from liability provided for by [Article 14 of the E-Commerce Directive]”.34 However, there is such an effect, the Court ruled, “[w]here, by contrast, the [online platform] has provided assistance which entails, in particular, optimising the presentation of the offers for sale in question or promoting those offers”.35

This suggests that the mere fact that an online platform takes certain measures relating to the provision of its services in a general manner does not necessarily mean that it plays an active role in respect of the individual content items it stores and that the online platform cannot benefit from the liability exemption for that reason. In the view of the Commission, such measures can, and indeed should, also include proactive measures to detect and remove illegal content online, particularly where those measures are taken as part of the application of the terms of service of the online platform. This is in line with the balance between the different interests at stake which the E-Commerce Directive seeks to achieve.36 Indeed, that Directive recalls that it is in the interest of all parties involved to adopt and implement rapid and reliable procedures for removing and disabling access to illegal information.37 Although that Directive precludes online platforms from being obliged to engage in general active fact-finding,38 it also acknowledges the importance of voluntary measures.39

Secondly, in accordance with Article 14 of the E-Commerce Directive, service providers falling within the scope of that provision can only benefit from the liability exemption where one of two conditions is met, namely: (a) they do not have actual knowledge of the illegal activity or information and, as regards claims for damages, are not aware of facts or circumstances from which the illegal activity or information is apparent, or (b) upon obtaining such knowledge or awareness, they act expeditiously to remove or to disable access to the information.

33 Recital 42 of the E-Commerce Directive. See Google France, paras. 114 and 120; Judgment of 12 July 2011, Case C‑324/09, L'Oréal v eBay, para. 113.

34 L'Oréal v eBay, para. 115.

35 L'Oréal v eBay, para. 116.

36 Recital 41 of the E-Commerce Directive.

37 Recital 40 of the E-Commerce Directive.

38 Article 15(1) of the E-Commerce Directive.

39 Recital 40 of the E-Commerce Directive.

The Court of Justice has clarified that these conditions cover “every situation in which the [online platform] concerned becomes aware, in one way or another, of [facts and circumstances on the basis of which a diligent economic operator should have identified the illegality in question]” and that this includes - besides notification by a third party - the situation where the platform “uncovers, as the result of an investigation undertaken on its own initiative, an illegal activity or illegal information”.40

It follows that proactive measures taken by an online platform to detect and remove illegal content may result in that platform obtaining knowledge or awareness of illegal activities or illegal information, which could thus lead to the loss of the liability exemption in accordance with point (a) of Article 14(1) of the E-Commerce Directive. However, in such cases the online platform continues to have the possibility to act expeditiously to remove or to disable access to the information in question upon obtaining such knowledge or awareness. Where it does so, the online platform continues to benefit from the liability exemption pursuant to point (b) of Article 14(1). Therefore, concerns related to losing the benefit of the liability exemption should not deter or preclude the application of the effective proactive voluntary measures that this Communication seeks to encourage.

3.3.2. Using technology to detect illegal content

Given the volume of material intermediated by online platforms, as well as technological progress in information processing and machine intelligence, the use of automatic detection and filtering technologies is becoming an ever more important tool in the fight against illegal content online. Many large platforms are now making use of some form of matching algorithms, based on a range of technologies, from simple metadata filtering, to hashing and fingerprinting content.
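
As a purely illustrative sketch, not drawn from this Communication, the snippet below shows the simplest of the techniques mentioned above: matching an uploaded file's cryptographic hash against a set of hashes of content already assessed as illegal. The function and variable names are hypothetical, and production systems typically rely on more robust perceptual fingerprinting rather than exact hashes.

```python
import hashlib

# Hypothetical set of SHA-256 digests of content previously assessed as illegal.
KNOWN_ILLEGAL_HASHES: set[str] = set()

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_illegal_content(path: str) -> bool:
    """Return True if the upload matches previously identified illegal content."""
    return sha256_of_file(path) in KNOWN_ILLEGAL_HASHES
```

Exact-hash matching only catches byte-identical copies; the fingerprinting techniques referred to above aim to also detect re-encoded or slightly modified versions of the same material.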

The E-Commerce Directive clarifies that the provisions relating to liability do not preclude the development and effective operation of technical systems of protection and identification and of technical surveillance instruments made possible by digital technology.41 As the Directive also makes clear, such operation must however take place within the limits of the applicable rules of EU and national law, in particular on the protection of privacy and personal data and the prohibition on Member States to impose general monitoring obligations.42

Sector-specific legislation can set mandatory rules requiring online platforms to take measures, e.g. in the field of copyright, to help ensure the detection and removal of illegal content, also when they are eligible for the liability exemption provided in Article 14 of the E-Commerce Directive.

More generally, the use and further development of such technology is encouraged, in particular when serious harm is at stake, as called for by the European Council Conclusions of 22 June 201743. Automatic tools and filters can be used to identify potentially infringing content, and private and public research is advancing in developing such tools. For instance, in the field of copyright, automatic content recognition has proven an effective tool for several years.

40 L'Oréal v eBay, paras. 120-121.

41 Recital 40 of the E-Commerce Directive.

42 Recital 40 and Article 15(1) of the E-Commerce Directive.

43 http://www.consilium.europa.eu/en/press/press-releases/2017/06/22-euco-security-defence/

The Commission supports further research and innovative approaches going beyond the state of the art with the objective of improving the accuracy of technical means to identify illegal content44. It also encourages industry to ensure an effective uptake of innovations which may contribute to increased efficiency and effectiveness of automatic detection procedures.

In most cases, current best industry practice is to use automatic tools to narrow down the set of contentious content for vetting by human experts, who then may need to assess the illegal nature of such content. This human-in-the-loop principle is, in general, an important element of automatic procedures that seek to determine the illegality of a given content, especially in areas where error rates are high or where contextualisation is necessary.
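
A minimal sketch of this human-in-the-loop principle is given below, purely for illustration; the threshold value, the queue and the classifier interface are assumptions for this example and do not reflect any particular platform's system.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ReviewQueue:
    """Items set aside for assessment by human experts (illustrative only)."""
    items: list[str] = field(default_factory=list)

    def enqueue(self, content_id: str) -> None:
        self.items.append(content_id)

def triage(content_id: str,
           score: Callable[[str], float],
           queue: ReviewQueue,
           review_threshold: float = 0.5) -> str:
    """Automatic tools narrow down contentious content; humans take the decision.

    `score` is a hypothetical classifier returning a 0..1 likelihood that the
    content is illegal. Content above the threshold is routed to human review
    rather than removed automatically, reflecting the human-in-the-loop principle.
    """
    if score(content_id) >= review_threshold:
        queue.enqueue(content_id)
        return "flagged_for_human_review"
    return "no_action"
```

For example, under these assumptions, triage("post-123", score=lambda _cid: 0.8, queue=ReviewQueue()) would route the item to the human review queue rather than act on it automatically.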

The Commission is of the view that proactive measures taken by those online platforms which fall under Article 14 of the E-commerce Directive to detect and remove illegal content which they host – including the use of automatic tools and tools meant to ensure that previously removed content is not re-uploaded – do not in and of themselves lead to a loss of the liability exemption.

In particular, the taking of such measures need not imply that the online platform concerned plays an active role which would no longer allow it to benefit from that exemption. Whenever the taking of such measures leads to the online platform obtaining actual knowledge or awareness of illegal activities or illegal information, it needs to act expeditiously to remove or to disable access to the illegal information in question in order to satisfy the condition for the continued availability of that exemption.

Online platforms should do their utmost to proactively detect, identify and remove illegal content online. The Commission strongly encourages online platforms to use voluntary, proactive measures aimed at the detection and removal of illegal content and to step up cooperation and investment in, and use of, automatic detection technologies.

4. Removing illegal content

It is in the interest of society as a whole that platforms remove illegal content as fast as possible. At the same time, removal of such content should not impede the prosecution of or other follow-up to any underlying breach of law. Evidence sharing between public authorities and online platforms is important in this regard. Cross-border access to evidence should be facilitated by the forthcoming legislative initiative on this issue45. The removal of illegal content by platforms should not affect investigations into or the prosecution of offences based on Union or national law.

44 Current R&I efforts deployed by industry are directed towards the development of analytical tools for a better understanding of natural language, information cascades in social networks, the identification of sources of information, dissemination patterns and fake identities. The Commission has also supported R&I in this field by funding projects aimed at developing automatic verification tools to check the veracity of user-generated content on social networks. These tools may help to identify potential falsehoods in texts, images or videos and support the tracking of fake news. However, the establishment of the illegal nature of such content goes beyond their current functional capabilities.

Robust safeguards to limit the risk of removal of legal content should also be available, supported by a set of meaningful transparency obligations to increase the accountability of the removal processes.

4.1.        Ensuring expeditious removal and reporting crime to law enforcement authorities

The E-Commerce Directive requires online platforms to act "expeditiously" to remove illegal content after they have obtained knowledge thereof, if they wish to continue to benefit from the liability exemption. What this means in practice depends on the specifics of the case at hand, in particular the type of illegal content, the accuracy of the notice and the potential damage caused.

In practice, different content types require a different amount of contextual information to determine the legality of a given content item. For instance, while it is easier to determine the illegal nature of child sexual abuse material, the determination of the illegality of defamatory statements generally requires careful analysis of the context in which they were made.

Where serious harm is at stake, for instance in cases of incitement to terrorist acts, fast removal is particularly important and can be subject to specific timeframes.

Some voluntary processes such as the Code of Conduct on countering illegal hate speech online have provided indicative targets for removal times, in the case of this Code of Conduct, 24 hours for the majority of cases.

Fully automated deletion or suspension of content can be particularly effective and should be applied where the circumstances leave little doubt about the illegality of the material, e.g. in cases of material whose removal is notified by law enforcement authorities, or of known illegal content which has previously been removed subject to the safeguards referred to in Section 4.3.

As a general rule, removal deriving from trusted flagger notices should be addressed more quickly, due to the quality and accuracy of the information provided in the notice and the trusted status of the flaggers.

In cases where economic damage is at stake due to an infringement of intellectual property rights, the potential economic damage arising from such an infringement may be closely related to the speed of its removal.

Clear reporting by platforms about the time taken for processing takedown requests according to the type of content will facilitate the assessment of the expeditiousness of the action taken and increase the wider accountability of platforms.

In certain cases, especially where online platforms find it difficult to assess the legality of a particular content item and it concerns a potentially contentious decision, they could benefit from submitting cases of doubt to a third party to obtain advice. Self-regulatory bodies or competent authorities play this role in different Member States. As part of the reinforced cooperation between online platforms and competent authorities, such cooperation is strongly encouraged.

45 See https://ec.europa.eu/info/law/better-regulation/initiatives/ares-2017-3896097_en and https://ec.europa.eu/home-affairs/what-we-do/policies/organized-crime-and-human-trafficking/e-evidence_en for more information.

Finally, online platforms should report to law enforcement authorities whenever they are made aware of or encounter evidence of criminal or other offences, in order to alert and enable the relevant authorities to investigate and prosecute individuals generating such content or the abuse of the services by organised criminal or terrorist groups. In doing so, they should comply with the applicable legal requirements, including Regulation (EU) 2016/679 on the protection of personal data46. This may also be appropriate in cases of offers and sales of products and commercial practices that are non-compliant with EU legislation.

The need to cooperate with law enforcement authorities in the investigation and prosecution of crimes may also in some cases lead to platforms abstaining from removing the illegal content at hand, when this is required in the framework of a specific investigation underway, under close supervision by national authorities and in full compliance with national criminal procedure rules.

Law enforcement authorities should build up the necessary capacity to take appropriate action on these reports.47 A best practice example concerning points of contact is the SIRIUS portal48 established by Europol to support Member States in online counter-terrorism investigations, including by facilitating cooperation between platforms and EU law enforcement.49

In accordance with Article 14 of the E-Commerce Directive, online platforms must take down illegal content expeditiously once they are made or become aware of its existence if they wish to remain exempt from liability. Particularly fast removal is important in the case of illegal content where serious harm is at stake, for instance in cases of incitement to terrorist acts. Removal times and procedures for different forms of illegal content should be clearly reported in transparency reports.

The issue of fixed timeframes for removal will be further analysed by the Commission.

Evidence of criminal offences obtained in the context of illegal content removal should be transmitted to law enforcement authorities, provided this is in compliance in particular with the requirements laid down in Regulation (EU) 2016/679, especially the lawful grounds for processing personal data.

46 Article 6(1)(c) in conjunction with Article 6(4).

47 According to Article 15(2) of the E-Commerce Directive, "Member States may establish obligations for information society service providers promptly to inform the competent public authorities of alleged illegal activities undertaken or information provided by recipients of their service or obligations to communicate to the competent authorities, at their request, information enabling the identification of recipients of their service with whom they have storage agreements."

48 Shaping Internet Research Investigations Unified System

49 Europol will further facilitate the creation of new Single Points of Contact by providing relevant training to law enforcement authorities in countries where SPOCs are not yet established.

4.2.        Enhancing transparency

4.2.1.      Transparency on the online platforms' content policy

The question of whether content is legal or illegal is governed by EU and national laws. At the same time, online platforms' own terms of service may deem specific types of content undesirable or objectionable.

Online platforms should disclose their detailed content policies in their terms of service and clearly communicate this to their users. These terms should not only define the policy for removing or disabling access to content, but also spell out the safeguards that ensure that content-related measures do not lead to over-removal. In particular, online platforms' terms of service should clearly spell out any possibility for the users to contest removal decisions as part of an enhanced transparency of the platforms' general removal policies. This should help reduce the potential negative effect on the users' fundamental right to freedom of expression and information.50

Online platforms should provide a clear, easily understandable and sufficiently detailed explanation of their content policy in their terms of service. These should reflect both the treatment of illegal content, and content which does not respect the platform's terms of service. All restrictions on the kind of content permitted on a particular platform should be clearly stated and communicated to their users. This explanation should also cover the procedures in place to contest removal decisions, including those triggered by trusted flaggers.

4.2.2.      Transparency on notice-and-action procedures

Transparency reporting should also cover the outcome of the application of the platforms' content management policies.

Online platforms should publish transparency reports with sufficiently detailed information on the number and type of notices received and actions taken, as well as the time taken for processing, and the source of the notification51. These reports should also include information on counter notices, if any, and the response given to these. The Commission encourages the publication of this information on a regular basis and at least once per year.
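
To illustrate the kind of aggregation such reporting implies, the sketch below computes hypothetical per-source and per-category figures from a list of processed notices. The field names and metrics are assumptions made for this example and do not constitute a reporting standard endorsed by this Communication.

```python
from collections import Counter
from statistics import median
from typing import Iterable, Mapping

def transparency_summary(notices: Iterable[Mapping]) -> dict:
    """Aggregate illustrative transparency figures from processed notices.

    Each notice is assumed, for this sketch, to carry:
      'source'          - e.g. 'user', 'trusted_flagger', 'law_enforcement', 'own_investigation'
      'category'        - the alleged type of illegality
      'action'          - e.g. 'removed', 'rejected'
      'hours_to_action' - processing time in hours
      'counter_notice'  - whether a counter-notice was filed
    """
    notices = list(notices)
    return {
        "notices_by_source": dict(Counter(n["source"] for n in notices)),
        "notices_by_category": dict(Counter(n["category"] for n in notices)),
        "actions": dict(Counter(n["action"] for n in notices)),
        "median_hours_to_action": median(n["hours_to_action"] for n in notices) if notices else None,
        "counter_notices_received": sum(1 for n in notices if n.get("counter_notice")),
    }
```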

Taking due account of content-specific differences, these transparency reports would benefit from some standardisation across the Digital Single Market. This would allow for better monitoring, facilitate the electronic aggregation of such information and could help avoid unnecessary barriers to the cross-border provision of hosting services.

Special attention should be paid to enable smaller online platforms and SMEs to provide such transparency data, and any supporting activity such as standardisation should ensure that administrative burdens are kept to a minimum.

50 In case personal data are being processed, platforms shall ensure transparent privacy policies according to Article 12 of the General Data Protection Regulation.

51 Reporting on own investigations, general user notices, notices by law enforcement authorities, etc.

The Commission will further explore, in structured dialogues with the industry, the potential of standardisation with regard to notification procedures and transparency reporting about removal systems and outcomes.

4.3.        Safeguards against over-removal and abuse of the system

Expeditious action, including upload-filtering measures or automated detection aimed at ensuring the prompt removal of illegal content, can affect the accuracy of the decision, in particular where there is no "human in the loop", and carries the risk that legal content is removed. It is therefore important to ensure that sufficient safeguards are available so that content which was erroneously removed can be reinstated.

4.3.1.      Contesting a notice

In general, those who provided the content should be given the opportunity to contest a removal decision via a counter-notice. This also applies where content removal has been automated.

For example, according to Article 28a of the proposal to amend the AVMSD, Member States have to ensure that complaint and redress mechanisms are available for the settlement of disputes between users and video-sharing platforms relating to the application of the appropriate measures to be taken by those platforms.

If the counter-notice has provided reasonable grounds to consider that the notified activity or information is not illegal, the platform provider should restore the content that was removed without undue delay or allow for the re-upload by the user, without prejudice to the platform's terms of service.

The possibility to contest decisions should lead to a decrease in the number of unjustified removals of legal content and could equally supply documentary evidence for out-of-court dispute resolution mechanisms or judicial appeal procedures.

In certain circumstances, informing the content provider and/or allowing for a counter-notice would not be appropriate – in particular in cases where this would interfere in the investigative powers of Member States’ authorities necessary for the prevention, detection and prosecution of criminal offences, such as in the case of child sexual abuse material.

Online platforms should offer simple online counter-notice procedures. When a counter-notice is filed, online platforms should provide a reply, and in the case of a negative decision the reasons should be specified. Where available in the Member State concerned, platforms are encouraged to allow the use of out-of-court dispute settlement bodies to resolve disputes about counter-notices.

4.3.2.      Measures against bad-faith notices and counter-notices

At the same time, notice-and-action procedures can sometimes be abused through bad practices or in bad faith52. These practices should be strongly discouraged, for instance by deprioritising notices from a notice provider who sends a high rate of invalid notices or receives a high number of counter-notices, or by revoking trusted flagger status, according to well-established and transparent criteria. These policies should also be clearly described in the terms of service of an online platform and be part of its general transparency reporting, to increase public accountability. Similar measures should be put in place regarding abusive counter-notices.
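
As a purely illustrative example of such deprioritisation, the snippet below assigns a hypothetical processing priority to incoming notices based on a provider's track record of invalid notices and upheld counter-notices. The thresholds and field names are assumptions, not criteria set by this Communication.

```python
from dataclasses import dataclass

@dataclass
class ProviderRecord:
    """Track record of a notice provider (illustrative only)."""
    notices_sent: int = 0
    invalid_notices: int = 0
    upheld_counter_notices: int = 0  # counter-notices that led to reinstatement
    trusted_flagger: bool = False

def notice_priority(record: ProviderRecord,
                    invalid_rate_threshold: float = 0.5) -> str:
    """Return a hypothetical processing priority for a provider's notices."""
    if record.notices_sent == 0:
        return "normal"
    invalid_rate = (record.invalid_notices + record.upheld_counter_notices) / record.notices_sent
    if invalid_rate >= invalid_rate_threshold:
        # Abusive or persistently inaccurate providers are deprioritised
        # (and a trusted flagger could have its status revoked).
        return "low"
    return "fast_track" if record.trusted_flagger else "normal"
```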

5. Preventing the re-appearance of illegal content

Illegal content, once detected and taken down, should not re-appear online. Efficient and effective prevention of re-appearance, based on existing good practices as well as on appropriate safeguards, is essential to a well-functioning system. Preventing known illegal material from being disseminated across platforms requires closer cooperation between online service providers, in full respect of the applicable rules of competition law. It is also important for law enforcement authorities to increase their cooperation with smaller, less resilient companies, which may become the platforms of choice for criminals and other persons involved in infringing activities online if they are deemed more vulnerable than others.

5.1.        Measures against repeat infringers

In order to avoid the re-appearance of illegal content posted by users uploading infringing content of the same nature over and over, many online platforms have already put in place measures against repeat infringers, such as the suspension or termination of accounts or shadow-banning measures.53

Online platforms should take measures which dissuade users from repeatedly uploading illegal content of the same nature and aim to effectively disrupt the dissemination of such illegal content.

This should also apply where the infringer is the same and the substance of the content in question is of the same nature and, where justified, the user has been promptly notified about the notice(s) received against him/her and about the forthcoming suspension or termination. This would allow the user to contest the decision and facilitate access to judicial redress against the measure, if appropriate under the contract between the user and the platform and the applicable law. In such cases, too, any processing of personal data must fully respect the relevant data protection rules.

Once again, no information or notification of the content provider should be provided where this would interfere in the investigative powers of Member States necessary for the prevention, detection and prosecution of criminal offences, when the necessary legal basis is provided.

52 Evidence suggests that such information is used by competitors (Notice and Takedown in Everyday Practice, J. Urban et al., UC Berkeley, 2016), and that the practice of automated creation of notices has been abused to link to artificially created content (Google Transparency Report).

53 "Shadow banning" is the act of blocking a user from an online community such that the user does not realise that they have been banned.

5.2.        Automatic re-upload filters

Besides the technologies used to identify potentially illegal content mentioned in Section 3.3, technological tools can be used with a higher degree of reliability to fingerprint and filter out (take down and stay down) content which has been already identified and assessed as illegal. The Commission therefore strongly encourages the further use of such tools subject to safeguards such as reversibility and exceptions as outlined below.

This is what currently takes place with the “Database of Hashes” developed under the EU Internet Forum in relation to terrorist content, as well as in the field of copyright, for child sexual abuse material, and for products which have been flagged by law enforcement authorities as non-compliant with relevant legislation. These practices have shown good results. However, their effectiveness depends on further improvements to limit erroneous identification of content and to facilitate context-aware decisions, as well as the necessary reversibility safeguards.

For instance, building on existing practice with automatic content recognition in the field of copyright, the Commission proposal on copyright in the Digital Single Market recognises such technologies – as long as they are appropriate and proportionate – as a possible means, inter alia, of preventing the availability of non-licensed content on the relevant online services.

Automatic stay-down procedures should allow for context-related exceptions and for cases where content that has been removed is subsequently changed and brought into conformity with legal or other requirements. The scope and timing of context-related exceptions should take into account the specific nature of the content and any related security threat, as well as the possibility of temporarily suspending such content pending a more in-depth appraisal.

The Commission strongly encourages the further use and development of automatic technologies to prevent the re-appearance of illegal content online.

Where automatic tools are used to prevent re-appearance of illegal content a reversibility safeguard should be available for erroneous decisions, and the use and performance of this technology should be made transparent in the platforms' terms of service.
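
Purely as an illustration of a stay-down check combined with a reversibility safeguard, the sketch below blocks re-uploads that match a database of hashes of content previously assessed as illegal, while keeping a record so that an erroneous match can be reversed, for example after a successful counter-notice. The class and method names are hypothetical and do not describe the Database of Hashes or any existing system.

```python
import hashlib

class StayDownFilter:
    """Illustrative re-upload filter with a reversibility safeguard."""

    def __init__(self) -> None:
        self.illegal_hashes: set[str] = set()      # hashes of content assessed as illegal
        self.blocked_uploads: dict[str, str] = {}  # upload_id -> matched hash, kept for appeals

    def register_illegal(self, data: bytes) -> None:
        """Add removed, assessed content to the stay-down database."""
        self.illegal_hashes.add(hashlib.sha256(data).hexdigest())

    def check_upload(self, upload_id: str, data: bytes) -> bool:
        """Return True if the upload may proceed, False if it is blocked."""
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.illegal_hashes:
            self.blocked_uploads[upload_id] = digest  # record the decision for possible reversal
            return False
        return True

    def reverse_decision(self, upload_id: str) -> None:
        """Reversibility safeguard: undo an erroneous block, e.g. after a counter-notice."""
        digest = self.blocked_uploads.pop(upload_id, None)
        if digest is not None:
            self.illegal_hashes.discard(digest)
```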

Access to databases that are used to automatically match and identify reappearing illegal content should be available to all online platforms, subject to compliance of any processing operation with applicable legislation on the protection of personal data and competition. Privacy policies of companies should include transparent information on processing of personal data in case of such databases.

Online platforms should also ensure the continuous updating of their tools, to ensure that all illegal content is captured, in line with the changing tactics and behaviour of criminals and other persons involved in infringing activities online. In the case of tools used for terrorist content, these should be adapted to capture new and historical content and to ensure its swift review and removal. Such content should be added to cross-platform tools, such as the aforementioned Database of Hashes (currently being used in relation to terrorist content). Such technological development should be carried out in cooperation between online platforms, competent authorities and other stakeholders, including civil society.

6. Conclusions

The increase in illegal content hosted by online platforms creates real harm in society, including risks to our citizens' integrity, dignity and health; if not properly addressed, such harm will also undermine trust in digital services more broadly, and ultimately in the Digital Single Market – a key engine of innovation, growth and jobs. Even if such content is created and uploaded by third parties, the constantly rising influence of online platforms in society, which flows from their role as gatekeepers to content and information, increases their responsibilities towards their users and society at large. They should therefore proactively weed out illegal content, prevent its reappearance, put effective notice-and-action procedures in place, establish well-functioning interfaces with third parties (such as trusted flaggers) and give particular priority to notices from national law enforcement authorities. Where online platforms decide which content should be considered illegal, in accordance with the law, adequate checks and balances should be put in place.

This Communication provides guidance and does not as such change the applicable legal framework or contain legally binding rules; it primarily aims to guide online platforms on the ways in which they can live up to their responsibility as regards tackling the illegal content they host. It also aims to mainstream good procedural practices across different forms of illegal content and to promote closer cooperation between platforms and competent authorities. As such, it outlines a European approach for online platforms to address illegal content, combining the need for fast and effective removal of illegal content and for the prevention and prosecution of crimes with safeguarding the right to free speech online. This guidance will complement and reinforce the ongoing sector-specific dialogues.

Special attention should be given to ensure that smaller online platforms are able to implement such procedures, and many elements of this Communication have been conceived bearing in mind their specific needs. Nonetheless, the Commission will explore further means to support take-up of the guidance for smaller platforms, too.

The Digital Single Market requires greater coherence of public policy responses across geographical borders. With this Communication, the Commission is therefore, as a first step, providing common tools to address the shared challenge of illegal content removal.

The letter of intent of 13 September 2017 from the President of the European Commission, addressed to the President of the European Parliament and the President of the Council of the European Union, announced further measures to ensure the swift and proactive detection and removal of illegal content inciting hatred, violence and terrorism, with a view to ensuring an area of Justice and Fundamental Rights based on mutual trust. This Communication constitutes a first element of such measures. The Commission expects online platforms to take swift action over the coming months, including in the context of relevant dialogues, in particular in the area of terrorism and illegal hate speech.

The Commission will continue exchanges and dialogues with online platforms and other relevant stakeholders. It will monitor progress and assess whether additional measures are needed, in order to ensure the swift and proactive detection and removal of illegal content online, including possible legislative measures to complement the existing regulatory framework. This work will be completed by May 2018.