The DSA Newsletter #5

The last four months in DSA news feel like two years. The DSA entered into full force in February, the Board has already managed to meet three times, the Commission launched five additional investigations (AliExpress, TikTok [2x], Facebook and Instagram), filed countless requests for more information, designated new VLOPs, issued Guidance on elections, consulted on the Delegated Act on Transparency reports, and much more. This is your irregular recap, this time covering the period from December 2023 to April 2024.

European Commission’s activities

It is an understatement to say that the European Commission has been busy. It launched several DSA investigations. The table below provides an overview.

I don’t have time to go into the details of every single investigation, but here are at least a couple of observations:

One interesting thing is that all five investigations concern not only special obligations but also standard obligations that are not within the exclusive competence of the EC. This means that, in theory, the DSCs of the countries of origin (Ireland and the Netherlands) could have been investigating some of these potential violations too. Now, the EC has assumed that competence (Article 66(2)). The Commission might be aiming to set the thinking on some of these provisions.

In terms of substance, the scope of the investigations is rather wide-ranging. Two issues seem universal though. Five services seem to have issues with researchers’ access to public data via scraping or APIs (Article 40(12)), and five seem to have some shortcomings when it comes to the assessment of risks (Article 34). Four services seem to have an issue with interfaces for notices or their handling (Article 16), and three with advertising archives (Article 39). Everything else is a bit specific to each service. For X/Twitter, the risk mitigation doubts revolve around illegal content (esp. hate speech and terrorist content), but also the effectiveness of Community Notes as a tool against manipulation (see my note in N4). The EC’s decision explains the investigation as follows:

“The effectiveness of the mitigation measures put in place as regards those systemic risks also appear inadequate, notably in the absence of well-defined and objectively verifiable performance metrics, and suspicions linked to insufficient resources dedicated to mitigation measures. These suspected failures concern, in particular the functioning of X’s “Community Notes” system, as well as the consistent application of X’s platform manipulation policy, as set out in its terms and conditions and the effectiveness of mitigation measures linked to subscription products, such as the blue checkmark”.

For TikTok, the doubt about risk mitigation concerns the potential rabbit-hole effects and addictiveness of the TikTok algorithm. Here is how the explanatory note formulates the suspicion: “the Commission is assessing TikTok’s risk management related to its (i) potentially addictive design, (ii) the service’s risk of leading users down ‘rabbit holes’ of harmful content, and (iii) the effectiveness of TikTok’s measures for preventing minors from accessing inappropriate content, in particular age assurance tools.” This was later extended to also cover “the DSA obligation to conduct and submit a risk assessment report prior to deploying functionalities, in this case the “Task and Reward Lite” program, that are likely to have a critical impact on systemic risks. In particular negative effects on mental health, including minors’ mental health, especially as a result of the new feature stimulating addictive behavior.”

TikTok ended up in trouble after failing to submit a risk assessment for its new feature, TikTok Lite (a program that rewards users for using the app). When it turned out that the assessment wasn’t properly prepared, the Commission opened an investigation and threatened to seek interim measures (Article 70, see PR). TikTok instead decided to suspend the rollout of the program within two days (see PR). As Mathias Vermeulen noted, this shows the insistence on ‘measure twice, cut once’. I think the episode is very useful because it shows other companies what should have been obvious from the start: significant new features require constant pre-testing even outside the annual cycle (Article 34(1): “prior to deploying functionalities that are likely to have a critical impact on the risks”). This has to become part of the compliance culture. The case would have been almost perfect for the EC to seek its first interim measures, so TikTok was probably very wise to back down quickly.

For Alibaba’s AliExpress, the risk mitigation concern is about the illegal conduct of third parties affecting consumers. There are a number of interesting issues there, such as the interpretation of Articles 23, 30 and 34 vs. Article 8 (the general monitoring prohibition), but I leave the commentary for when we know more facts. AliExpress’ and X’s grounds for investigation are now public (AliExpress, X).

The investigation into Meta’s Instagram/Facebook services (PR here) will clearly attract the most news coverage. Some already see it as an important milestone. I prefer to wait and read the decision on the opening of the investigation (once it is published) and see what evidence emerges. The EC’s objection concerning the de-prioritisation of political content depends on how that policy operates: is it more of an optimisation goal in the recommender system, or an actual rule down-ranking anything that meets some definition? If the latter, which the EC’s investigation seems to imply, then violations of Articles 14(1) and 17 are in order if it is true that Facebook fails to provide any explanation or redress. The objection to the shutting down of CrowdTangle was low-hanging fruit, as civil society has been complaining about this very effectively, which means that the EC will clearly have good arguments there. The user-friendliness of the notification interface is again fact-specific. So the biggest charge is then the following:

“Deceptive advertisements and disinformation. The Commission suspects that Meta does not comply with DSA obligations related to addressing the dissemination of deceptive advertisements, disinformation campaigns and coordinated inauthentic behaviour in the EU. The proliferation of such content may present a risk to civic discourse, electoral processes and fundamental rights, as well as consumer protection.”

Again, it is unclear what evidence the EC has on this. We will have to wait and see. It is significant that the charge seems mostly related to mitigation, i.e. a failure to act against the risks, and not just to assess them. It is not yet clear whether this part of the investigation builds upon the Election Guidance in any way. Even if it does, the Election Guidance in itself will not be sufficient to create minimum expectations of risk mitigation. The Guidance is a list of many measures that companies can take; it documents the Commission’s view of what the best practices are (Article 35(3)). In this sense, any guidance issued by the EC differs from Codes of Conduct, which are actually adopted by the companies and thus form a stronger evidence base for the minimum expectations of risk mitigation in the industry. I think it is possible that the Commission is trying to pre-enforce parts of the Regulation on Political Advertising (Regulation (EU) 2024/900), which will only apply from October 2025. In particular, some labelling and transparency requirements could already be formulated as risk management obligations under the DSA in some settings. This would explain why the focus is on misleading acts, and why the dark patterns provision (Article 25) is also thrown into the investigation.

The information requests filed by the EC are very diverse (all of them are here). Many of them relate to the implementation of Article 40(12). SNV has a nice tracker here. I think it is very good that the same questions are sent to all companies, as this creates a better understanding of where we are on compliance. It would be useful if the EC could turn some of these findings into factsheets going forward. Academics from Weizenbaum, European University Viadrina and SNV have also published their first impressions of compliance with Article 40(12) (definitely worth a read).

After the three adult-site VLOPs designated last time (see N4), the EC has just designated Shein as a VLOP and, given the reported numbers, is likely to soon designate Temu too. Both are Chinese marketplaces based in Ireland (here, here).

According to the minutes and PRs from the first three Board meetings, the EC is currently finalising the draft Delegated Act on Transparency Reports (likely out in May 2024) and the draft Delegated Act on Data Access (draft for consultation in May, final adoption planned for July 2024). The Commission has now also issued Guidelines on Elections and is apparently working on Guidelines on the protection of minors to flesh out Article 28. These guidelines might become incredibly consequential because they will apply to hundreds if not thousands of services, not just two dozen VLOPs/VLOSEs. The same is true for another ongoing EC project: according to the Board minutes, the EC also seems to be working on Trusted Flagger guidelines. Finally, the EC is seeking to convert two codes, the Code of Practice on Disinformation and the Code of Conduct on Countering Illegal Hate Speech Online, into official DSA Codes of Conduct. Although this will still not make the Codes legally binding, it turns them into evidence of industry best practice, which can help the Commission set some prima facie risk mitigation expectations (see my point above). Moreover, VLOPs and their auditors will have to assess compliance explicitly against such codes.

DSA before the Court of Justice of the EU

EC succeeds in its appeal against the suspension of advertising archives

At the end of March, the ECJ issued its decision on the appeal in the Amazon case (C-639/23 P(R)), dealing with interim measures suspending the public availability of ad archives (Article 39 DSA). The Commission appealed the General Court’s grant of interim measures that allowed Amazon to avoid publishing its ad archive. The decision did not, however, suspend Amazon’s obligation to compile it. The ECJ decision is very interesting and clearly tries to set the framework for future interim measures under the DSA, but also the DMA. Here are some key takeaways:

  • The ECJ starts by restating the cumulative test for interim measures (“if it is established that such an order is justified, prima facie, in fact and in law and that it is urgent in so far as, in order to avoid serious and irreparable damage to the interests of the party making the application (..) [the court must] weigh up the interests involved”, para 66).
  • The ECJ finds that Amazon satisfies all but the last requirement: the balancing of interests does not speak in Amazon’s favour because the effects of publishing the ad archive are not existential for Amazon’s business, and the downside can be partly undone and/or compensated ex post in money. The Court says the following about the three requirements:
  • Prima facie case: The ECJ held that the “direct legal connection” between the validity of Article 39 DSA (a measure of general application) and the designation decision (a measure of individual application) cannot be treated as “lacking in seriousness” (para 111). This point is key because an action against a measure of individual application can, via Article 277 TFEU, extend incidentally to the review of a measure of general application only where such a “direct legal connection” exists. Amazon argued that Article 39 DSA violates Articles 7, 16 and 17 of the EU Charter (that is, respect for private life, the freedom to conduct a business, and the protection of (intellectual) property). Amazon benefits from the GC not having examined this point fully, and it is likely that once the Amazon case is decided, this argument cannot be easily replicated by others. On substance, the ECJ only responds by zooming in on Articles 7 and 16, saying that one cannot deny “major legal disagreement” on this point, which means that Amazon has a prima facie case because at least some information can be commercially sensitive and its disclosure thus covered as an interference by these provisions.
  • Urgency: Amazon argued that ad archives will allow its competitors to learn about its business strategies, and will discourage advertisers from using its services, as the advertisers’ strategies can be inferred from the archives. The ECJ held that “it is clear that the harm that is liable to be suffered by Amazon due to the publication of its business secrets would differ, both in nature and in scale, according to whether the persons who acquire knowledge of those business secrets are its customers, its competitors, financial analysts, or indeed members of the general public. It would be impossible to identify the number and status of all those who in fact had knowledge of the published information and thereby assess the consequences that the publication of that information might have on Amazon’s commercial and financial interests” (para 135). The Court views at least some of this damage as irreparable (para 136). Amazon here benefits from uncertainty about what information is already in the public domain, or available to competitors. Amazon argued that its advertising market share might shrink and that advertisers might not come back even if it wins. The ECJ accepts that it is arguable that some of this might materialise.
  • Balancing of interests: Amazon finally loses on the point of balancing. The ECJ notes that the publication concerns a continuous stream of new data, and hence, if Amazon wins the case on the merits, it will be put back in its previous position. The nature of the data is not such that a one-time disclosure would destroy everything; Amazon can adapt, relying on new business strategies. The Court says: “It follows that, if the decision at issue is annulled, Amazon will no longer be required to compile the repository required by that Article 39. Accordingly, it will no longer be required to keep online information relating to advertisements presented on Amazon Store or to disclose information relating to developments in its advertising campaigns or new advertising campaigns. That annulment would therefore be such as to ensure that advertisers returned to a more attractive business environment and to enable Amazon to develop new strategies in the management of its advertising activities without its competitors being able to acquaint themselves with them by means of that repository” (para 147). The Court then proceeds to argue that while some harm might be irreparable, it is not existential for Amazon and its long-term development (para 151). Advertising is only 7% of Amazon’s revenues, and the situation would only be temporary. Plus, the carve-out from the VLOP set of obligations could frustrate the overall goals of the DSA (paras 159-162) and give Amazon a competitive advantage (para 163). Thus, “it must be held that the interests defended by the EU legislature prevail, in the present case, over Amazon’s material interests, with the result that the balancing of interests weighs in favour of dismissing the application for interim measures” (para 164).

The overall takeaways: the ECJ made it more difficult (compared to the General Court) but still far from impossible to argue for the suspension of Article 39 DSA as part of a designation dispute. The balancing of interests can play out very differently if a VLOP is essentially an ad business, like Facebook or Instagram. However, even in those cases, future applicants might struggle to prove a prima facie case because the Amazon case will eventually settle the objections as to the validity of Article 39. We will soon see how the General Court applies this in WebGroup Czech Republic v Commission, T-139/24 (Xvideos), and Aylo Freesites v Commission, T-138/24 (Pornhub), which both seek interim measures. It is unclear whether the target of these measures is the same as in Amazon or not. We now also have case numbers for the three new designation disputes: T-138/24 (Pornhub), T-139/24 (Xvideos) and T-134/24 (Stripchat). The pleas in law are still not published in the OJ, so we don’t know the details of the arguments made. Stripchat does not seem to seek any interim measures.

Fee fights: Three companies, Meta (T-55/24), TikTok (T-58/24) and Zalando (here, not in the OJ yet), have filed cases against the fee decisions. According to the DSA, VLOPs and VLOSEs must pay a supervisory fee, the size of which depends on the profit of the respective company. Zalando also filed an access to information request and a follow-up lawsuit (Zalando v Commission, T-203/24). Meta’s pleas in law state that the Delegated Act on Fees goes against Article 43 DSA, and that the methodology used to calculate monthly active users is against the DSA too. TikTok’s pleas in law are based on arguments similar to Meta’s.

Just a reminder: the methodology at stake here differs from the self-assessment undertaken by companies for the purposes of designation. For context, Bloomberg reported that Amazon, Pinterest, Snapchat, X and Wikipedia pay zero in supervision fees, while Alphabet and Meta pick up three-quarters of the overall supervision bill. The reason is simple: the fee under Article 43 is capped at 0.05% of the worldwide annual net income of the VLOP. No reported profit? Others must shoulder the overall cost of supervision. Or, put differently, this is who pays the 45.2 million euros (thanks Bloomberg):
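To see why the cap shifts the bill this way, here is a minimal sketch of a proportional allocation with an income cap. Only the 0.05% cap and the EUR 45.2m total come from the text above; the provider names, user numbers and incomes are made up for illustration, and the actual methodology is the one set out in the Delegated Act on Fees, not this simplification.

```python
# Toy sketch of the Article 43 fee logic described above: the overall
# supervision cost is shared in proportion to user numbers, but each
# provider's fee is capped at 0.05% of its worldwide annual net income.
# All provider figures below are illustrative, not real numbers.

TOTAL_COST = 45_200_000  # overall supervision bill (EUR), per Bloomberg
CAP_RATE = 0.0005        # cap: 0.05% of worldwide annual net income

# name -> (monthly active users in the EU, worldwide annual net income in EUR)
providers = {
    "Provider A": (250_000_000, 100_000_000_000),  # large and profitable
    "Provider B": (100_000_000, 20_000_000_000),   # mid-size, lower income
    "Provider C": (45_000_000, 0),                 # no reported profit -> cap is 0
}

def allocate(providers, total_cost):
    fees = {name: 0.0 for name in providers}
    remaining = total_cost
    active = set(providers)
    # Share the cost in proportion to user numbers; whoever would exceed
    # their income cap pays the cap and drops out, and the leftover is
    # redistributed among the rest in the next pass.
    while remaining > 1e-6 and active:
        total_users = sum(providers[n][0] for n in active)
        capped_now = set()
        for n in list(active):
            users, income = providers[n]
            share = remaining * users / total_users
            cap = CAP_RATE * income
            if fees[n] + share >= cap:
                fees[n] = cap
                capped_now.add(n)
        if capped_now:
            active -= capped_now
            remaining = total_cost - sum(fees.values())
        else:
            for n in active:
                fees[n] += remaining * providers[n][0] / total_users
            remaining = 0.0
    return fees

for name, fee in allocate(providers, TOTAL_COST).items():
    print(f"{name}: EUR {fee:,.0f}")
```

With these toy numbers, the loss-making provider pays nothing, the mid-size provider is capped at 0.05% of its income, and the largest profitable provider ends up covering roughly three-quarters of the total, which is exactly the dynamic Bloomberg describes.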

This means that there are currently 9 DSA disputes pending. Moreover, the Commission has initiated four infringement proceedings for non-implementation of the DSA against Cyprus, Czechia, Poland and Slovakia. Not too bad for a law that has been in full application only since February.

Various

The Dutch ACM issued draft guidance on the DSA, as did the Irish Coimisiún na Meán on Trusted Flaggers and on out-of-court dispute settlement bodies. IViR held an amazing conference on the DSA in Amsterdam. The recordings of the main sessions are here. My keynote on How to (not) enforce the DSA is here. The IViR’s Observatory also has a ton of interesting new articles on ODS bodies (e.g., here or here).

Academics have now also started publishing the first analyses of SoR database data. Daria Dergacheva, Vasilisa Kuznetsova, Rebecca Scharlach and Christian Katzenbach have a nice paper about 24 hours of content moderation on VLOP-size social media. They report a lot of interesting findings. One that I like a lot is the following:

Another empirical piece, by Rishabh Kaushal, Jacob van de Kerkhof, Catalina Goanta, Gerasimos Spanakis and Adriana Iamnitchi, analyzes a representative sample of SoRs (131m, just wow!) submitted to the EC Transparency Database in November 2023 to study platform content moderation practices. They have a similar chart, but based on a much larger and broader data set:

Another of their fascinating findings is this:

And there is much more in both papers. I have to say I am very impressed by how much the SoR data already reveals about the content moderation ecosystem. As I write this, the SoR database already has 16,750,366,296 entries. Yes, it is a huge experiment and far from perfect, but its real-time character makes it an interesting tool even though we don’t have a public archive of the decisions themselves. A lot of fine-tuning is needed to further improve the database (which, by the way, should also include non-VLOP platform data going forward).
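If you want to poke at the data yourself, a few lines of pandas on one of the database’s daily dumps go a long way. The sketch below is a minimal example only: the file name, the column names (platform_name, automated_decision) and the value code are my assumptions, so check the database documentation for the actual schema before running it.

```python
# Minimal sketch for exploring a daily dump of the DSA Transparency
# Database (the SoR database). The file name, column names and value
# codes here are assumptions; consult the database docs for the real schema.
import pandas as pd

# Hypothetical daily dump file downloaded from the database.
sors = pd.read_csv("sor-daily-dump-2023-11-01.csv")

# Which platforms submit the most statements of reasons?
by_platform = sors["platform_name"].value_counts()
print(by_platform.head(10))

# Share of fully automated decisions per platform (assumed value code).
fully_automated = (
    sors.groupby("platform_name")["automated_decision"]
        .apply(lambda s: (s == "Fully automated").mean())
        .sort_values(ascending=False)
)
print(fully_automated.head(10))
```

Even aggregations this simple already reproduce the kind of platform-level comparisons the two papers above chart in much more depth.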

I have a new(ish) piece on the DSA’s Red Line: What the Commission Can and Cannot Do About Disinformation. The article is part of a fascinating paper symposium that should be out soon. Rachel Griffin has a new paper on the ideology behind the DSA. Max van Drunen and Arman Noroozian have a new paper about how to design data access for researchers. Alexander Peukert has a thought-provoking piece on Verfassungsblog about the EC Guidance on Elections. Integrity Institute published a piece on risk assessments.

If you have an interesting DSA paper, just let me know. I will try not to forget to mention it.