More on the (for now unattainable) qualification of social media platforms as State Actors for First Amendment (free speech) purposes

Another decision (this time on appeal) rejecting a claim against Facebook (rectius, Meta) alleging that it unlawfully filters/censors posts or removes accounts, in violation of the First Amendment (free speech).

That right can be asserted only against the State, or against those acting in its name or in concert with it.

The decision is Ninth Circuit Court of Appeals, November 22, 2021, No. 20-17489, D.C. No. 3:20-cv-05546-RS, Atkinson v. Meta/Zuckerberg, affirming a California district court ruling.

The user reasserts (and the court rejects, one by one) all the usual, well-known causes of action on the subject. Nothing new, but a useful refresher.

The court also confirms the application of the safe harbor under § 230 CDA.

(News and link to the decision via Eric Goldman's blog)

An interesting US decision on Facebook's unexplained closure of a user's account

The case is Northern District of California, November 12, 2021, 21-cv-04573-EMC, King v. Facebook (via Eric Goldman's blog).

The ruling is of interest, since the unexplained closure of Facebook accounts appears to be not so rare.

The plaintiff raises several claims (one based on § 230.c.2.A CDA: puzzling, since that provision exempts from liability rather than imposing it; p. 4 ff.).

Here I focus on claim E, p. 10 ff., based on breach of contract under the implied covenant of good faith and fair dealing.

The claim concerning the destruction of content is dismissed (sub 1; not convincingly, though: even absent a specific duty on Facebook to preserve the content, good faith at least requires adequate notice before the impending destruction), while the claim concerning the lack of an explanation is upheld (sub 2, p. 12 ff.).

Facebook relies on the agreed clause <<If we determine that you have clearly, seriously or repeatedly breached our Terms or Policies, including in particular our Community Standards, we may suspend or permanently disable access to your account.>> to claim that it had full discretion.

The judge, however, has little difficulty showing that this is not so: <<Notably, the Terms of Service did not include language providing that Facebook had “sole discretion” to act.  Compare, e.g., Chen v. PayPal, Inc., 61 Cal. App. 5th 559, 570-71 (2021) (noting that contract provisions allowed “PayPal to place a hold on a payment or on a certain amount in a seller’s account when it ‘believes there may be a high level of risk’ associated with a transaction or the account[,] [a]nd per the express terms of the contract, it may do so ‘at its sole discretion’”; although plaintiffs alleged that “‘there was never any high level of risk associated with any of the accounts of any’ appellants, . . . this ignores that the user agreement makes the decision to place a hold PayPal’s decision – and PayPal’s alone”). 

Moreover, by providing a standard by which to evaluate whether an account should be disabled, the Terms of Service suggest that Facebook’s discretion to disable an account is to be guided by the articulated factors and cannot be entirely arbitrary.  Cf. Block v. Cmty. Nutrition Ins., 467 U.S. 340, 349, 351 (1984) (stating that the “presumption favoring judicial review of administrative action . . . may be overcome by specific language or specific legislative history that is a reliable indicator of congressional intent” – i.e., “whenever the congressional intent to preclude judicial review is ‘fairly discernible in the statutory scheme’”). 

At the very least, there is a strong argument that the implied covenant of good faith and fair dealing imposes some limitation on the exercise of discretion so as to not entirely eviscerate users’ rights>>

Moreover (sub 3, p. 14), at the very least an explanation was owed (the points sub 2 and sub 3 overlap).

In short, both the disabling and the lack of an explanation are held unlawful (and, as just noted, the two overlap: teasing apart the conceptual distinction would take too much space and time).

Finally, the predictable safe-harbor defense under § 230.c.1 CDA <Treatment of publisher or speaker> covers the disabling but not the failure to explain (p. 22).

On the second point there is little to discuss: the judge is right.

The first point is harder to answer, and important in practice, since any disabling will constitute, from the disabled user's point of view, a breach of contract.

The judge sides with Facebook: the existence of a contractual term does not deprive Facebook of the safe harbor: <<although Ms. King’s position is not without any merit, she has glossed over the nature of the “promise” that Facebook made in its Terms of Service. In the Terms of Service, Facebook simply stated that it would use its discretion to determine whether an account should be disabled based on certain standards. The Court is not convinced that Facebook’s statement that it would exercise its publishing discretion constitutes a waiver of the CDA immunity based on publishing discretion. In other words, all that Facebook did here was to incorporate into the contract (the Terms of Service) its right to act as a publisher. This by itself is not enough to take Facebook outside of the protection the CDA gives to “‘paradigmatic editorial decisions not to publish particular content.’” Murphy, 60 Cal. App. 5th at 29. Unlike the very specific one-time promise made in Barnes, the promise relied upon here is indistinguishable from “‘paradigmatic editorial decisions not to publish particular content.’” Id. It makes little sense from the perspective of policy underpinning the CDA to strip Facebook of otherwise applicable CDA immunity simply because Facebook stated its discretion as a publisher in its Terms of Service>>.

The decision may be correct on this specific point, but further analysis would be needed.

Joint liability of digital platforms for the 2016 Orlando (Florida, USA) mass shooting? No

In the 2016 Orlando mass shooting, Omar Mateen killed 49 people and wounded 53 with a semi-automatic rifle, pledging allegiance to ISIS.

The victims sued Twitter, Google, and Facebook both under the Anti-Terrorism Act, 18 U.S.C. §§ 2333(a) & (d)(2) (alleging the platforms were liable for facilitating his access to radical jihadist and ISIS-sponsored content in the months and years leading up to the shooting), and under state law, for negligent infliction of emotional distress and wrongful death.

The ATA imposes civil liability on “any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed . . . an act of international terrorism,” provided that the “act of international terrorism” is “committed, planned, or authorized” by a designated “foreign terrorist organization.”

The Eleventh Circuit Court of Appeals, September 27, 2021, No. 20-11283, Colon et al. v. Twitter, Facebook and Google, denies any liability on the platforms' part, affirming the Florida district court.

The first claim is rejected both because the attack was not international terrorism (even though ISIS claimed it), as the statute requires, and because it was not committed by a foreign terrorist organization but by a so-called lone wolf.

Above all, the second claim (negligence causing injuries and deaths) is rejected: the plaintiffs failed to establish proximate causation as to the platforms' role (sub IV.A, p. 21 ff.).

The court does discuss causation, but in the abstract and on the basis of precedent, without applying it to the platforms' role in the commission of crimes.

Oddly, the court does not mention the § 230 CDA safe harbor, which could have been invoked (and which the platforms almost certainly did invoke).

(News and link via Eric Goldman's blog)

Internet providers' liability for IP infringement: liability of the Cloudflare platform is denied

According to the owners of copyrights in wedding dresses, infringers' use of the Cloudflare platform to sell counterfeit products also gives rise to liability on Cloudflare's part.

The Northern District of California denies this: Case 3:19-cv-01356-VC, October 6, 2021, MON CHERI BRIDALS v. Cloudflare.

According to the plaintiffs, <Cloudflare contributes to the underlying copyright infringement by providing infringers with caching, content delivery, and security services.>
But contributory infringement arises only if the defendant <“(1) has knowledge of another’s infringement and (2) either (a) materially contributes to or (b) induces that infringement”>.

The court observes: <Simply providing services to a copyright infringer does not qualify as a “material contribution.” Id. at 797-98. Rather, liability in the internet context follows where a party “facilitate[s] access” to infringing websites in such a way that “significantly magnif[ies]” the underlying infringement. Perfect 10, Inc. v. Amazon.com, Inc., 508 F.3d 1146, 1172 (9th Cir. 2007); see A&M Records, Inc. v. Napster, Inc., 239 F.3d 1004, 1022 (9th Cir. 2001). A party can also materially contribute to copyright infringement by acting as “an essential step in the infringement process.” Louis Vuitton Malletier, S.A. v. Akanoc Solutions, Inc., 658 F.3d 936, 943-44 (9th Cir. 2011) (quoting Visa International, 494 F.3d at 812 (Kozinski, J., dissenting)). >

It therefore rejects the claim.

1 – The plaintiffs offered no evidence from which a jury could find <that Cloudflare’s performance-improvement services materially contribute to copyright infringement. The plaintiffs’ only evidence of the effects of these services is promotional material from Cloudflare’s website touting the benefits of its services. These general statements do not speak to the effects of Cloudflare on the direct infringement at issue here. For example, the plaintiffs have not offered any evidence that faster load times (assuming they were faster) would be likely to lead to significantly more infringement than would occur without Cloudflare. Without such evidence, no reasonable jury could find that Cloudflare “significantly magnif[ies]” the underlying infringement. Amazon.com, Inc., 508 F.3d at 1172. Nor are Cloudflare’s services an “essential step in the infringement process.” Louis Vuitton Malletier, 658 F.3d at 944. If Cloudflare were to remove the infringing material from its cache, the copyrighted image would still be visible to the user; removing material from a cache without removing it from the hosting server would not prevent the direct infringement from occurring. >

The issue of the specificity of the evidence is important, and often decisive, in the Italian legal system as well when the same problem arises. (The caching architecture underlying the court's reasoning in point 1 is sketched after point 2 below.)

2 – Nor does Cloudflare make it harder to detect the infringement: <Cloudflare’s security services also do not materially contribute to infringement. From the perspective of a user accessing the infringing websites, these services make no difference. Cloudflare’s security services do impact the ability of third parties to identify a website’s hosting provider and the IP address of the server on which it resides. If Cloudflare’s provision of these services made it more difficult for a third party to report incidents of infringement to the web host as part of an effort to get the underlying content taken down, perhaps it could be liable for contributory infringement. But here, the parties agree that Cloudflare informs complainants of the identity of the host in response to receiving a copyright complaint, in addition to forwarding the complaint along to the host provider>.
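The court's premise in point 1 is essentially architectural: a pull-through CDN cache sits in front of an origin host, so purging the cached copy does not make the content unavailable, because later requests simply fall through to the origin. A minimal sketch of that setup is below; the class and file names are hypothetical and are not taken from the decision or from Cloudflare's actual services.

```python
# Minimal sketch (hypothetical names) of a pull-through cache in front of an origin host.

class OriginHost:
    """The hosting server that actually stores the content."""
    def __init__(self):
        self.content = {"/dress.jpg": b"<image bytes>"}

    def get(self, path):
        return self.content.get(path)


class CDNCache:
    """A caching layer in front of the origin: serves cached copies, falls back to the origin."""
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}

    def get(self, path):
        if path in self.cache:            # cache hit: served from the edge
            return self.cache[path]
        body = self.origin.get(path)      # cache miss: fall through to the origin
        if body is not None:
            self.cache[path] = body
        return body

    def purge(self, path):
        self.cache.pop(path, None)        # removes only the cached copy, not the original


origin = OriginHost()
cdn = CDNCache(origin)

print(cdn.get("/dress.jpg") is not None)   # True: first request populates the cache
cdn.purge("/dress.jpg")                    # purge the cache; the origin is untouched
print(cdn.get("/dress.jpg") is not None)   # still True: the origin still serves the file
```

Seen this way, the court's observation that removing material from the cache "would not prevent the direct infringement from occurring" simply describes how a pull-through cache behaves: the copy on the hosting server remains reachable.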

Oddly, there is no mention of the threshold defense (procedural, or going to the merits?) based on the § 230 CDA safe harbor: it seemed available.

(News and link to the decision via Eric Goldman's blog)

Intellectual property, to which the § 230 CDA safe harbor does not apply, also includes the right of publicity

A journalist finds her image unlawfully reproduced on Facebook and on the social network Imgur, to which a link on Reddit pointed.

She sues all the platforms for violation of her right of publicity (r.o.p.), but they invoke § 230 CDA.

Which, however, does not apply to intellectual property (IP) (§ 230.e.2).

For the platforms, the right of publicity is distinct from IP, so the safe harbor can operate.

The first-instance judge takes the same view.

For the Third Circuit Court of Appeals, by contrast, the right of publicity falls squarely within IP, so the safe harbor does not operate (Hepp v. Facebook, Reddit, Imgur and others, Nos. 20-2725 & 20-2885, September 23, 2021).

Dictionaries, legal and otherwise, under the entry <intellectual property> indirectly include the r.o.p. (pp. 18-19): above all because they include trademarks, to which the r.o.p. is to be assimilated.

(Sub D, finally, the panel takes care to clarify that this position, seemingly adverse to internet communication via platforms, will not have disastrous consequences.)

(Text of and link to the decision via Eric Goldman's blog)

Collecting others' personal information for later sale: right of publicity and the § 230 CDA safe harbor

The Northern District of California, August 16, 2021, 21-cv-01418-EMC, Cat Brooks et al. v. THOMSON REUTERS CORPORATION (hereinafter, TR), decides the dispute brought by the former over the collection and subsequent sale to third parties of their personal data.

The information giant TR, acting as a data broker, collected and sold people's information to interested businesses (through its CLEAR platform).

More precisely: Thomson Reuters “aggregates both public and nonpublic information about millions of people” to create “detailed cradle-to-grave dossiers on each person, including names, photographs, criminal history, relatives, associates, financial information, and employment information.” See Docket No. 11 (Compl.) ¶ 2. In addition to publicly available information on social networks, blogs, and even chat rooms, Thomson Reuters also pulls “information from third-party data brokers and law enforcement agencies that are not available to the general public, including live cell phone records, location data from billions of license plate detections, real-time booking information from thousands of facilities, and millions of historical arrest records and intake photos.”

1) Among the various causes of action, I consider the right of publicity.

The claim is dismissed not so much because there is no use (as TR argued), but because there is no <Appropriation of Plaintiffs’ Name or Likeness For A Commercial Advantage>: <Although the publishing of Plaintiffs’ most private and intimate information for profit might be a gross invasion of their privacy, it is not a misappropriation of their name or likeness to advertise or promote a separate product or service>, p. 8.

2) The § 230 CDA safe harbor, invoked by TR.

Of the three required elements (“(1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a state law cause of action, as a publisher or speaker (3) of information provided by another information content provider”), TR failed to establish the second and the third.

As to element 2, case law teaches that <<a plaintiff seeks to treat an interactive computer service as a “publisher or speaker” under § 230(c)(1) only when it is asking that service to “review[], edit[], and decid[e] whether to publish or withdraw from publication third-party content.” Id. (quoting Barnes, 570 F.3d at 1102). Here, Plaintiffs are not seeking to hold Thomson Reuters liable “as the publisher or speaker” because they are not asking it to monitor third-party content; they are asking it to moderate its own content>>.

As to element 3, the information is not provided by third parties but by TR itself: <the “information” at issue here (the dossiers with Plaintiffs’ personal information) is not “provided by another information content provider.” 47 U.S.C. § 230(c)(1). In Roommates.com, the panel explained that § 230 was passed by Congress to “immunize[] providers of interactive computer services against liability arising from content created by third parties.” 521 F.3d at 1162 (emphasis added). The whole point was to allow those providers to “perform some editing on user-generated content without thereby becoming liable for all defamatory or otherwise unlawful messages that they didn’t edit or delete. In other words, Congress sought to immunize the removal of user-generated content, not the creation of content.” Id. at 1163 (emphases added). Here, there is no user-generated content: Thomson Reuters generates all the dossiers with Plaintiffs’ personal information that is posted on the CLEAR platform. See Compl. ¶¶ 13. In other words, Thomson Reuters is the “information content provider” of the CLEAR dossiers because it is “responsible, in whole or in part, for the creation or development of” those dossiers. 47 U.S.C. § 230(f)(3). It is nothing like the paradigm of an interactive computer service that permits posting of content by third parties.>

Discrimination in housing searches via Facebook: the proof is lacking

A claim alleging violation of the Fair Housing Act and analogous state statutes (no results, or unjustifiably different results compared with another person of a different ethnicity, from searches allegedly run from accounts of so-called Latino ethnicity) is dismissed for lack of proof.

In Italy see above all Legislative Decree No. 216 of 9 July 2003 and Legislative Decree No. 215 of the same date (the leading author on the topic is Prof. Daniele Maffeis, in many writings including this one).

In the Anglo-American world, especially in the United States, there is an enormous literature on the topic: see, e.g., Rebecca Kelly Slaughter, Janice Kopec & Mohamad Batal, Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission, Yale Journal of Law & Technology.

The judge writes:

<In sum, what the plaintiffs have alleged is that they each used Facebook to search for housing based on identified criteria and that no results were returned that met their criteria. They assume (but plead no facts to support) that no results were returned because unidentified advertisers theoretically used Facebook’s Targeting Ad tools to exclude them based on their protected class statuses from seeing paid Ads for housing that they assume (again, with no facts alleged in support) were available and would have otherwise met their criteria. Plaintiffs’ claim that Facebook denied them access to unidentified Ads is the sort of generalized grievance that is insufficient to support standing. See, e.g., Carroll v. Nakatani, 342 F.3d 934, 940 (9th Cir. 2003) (“The Supreme Court has repeatedly refused to recognize a generalized grievance against allegedly illegal government conduct as sufficient to confer standing” and when “a government actor discriminates on the basis of race, the resulting injury ‘accords a basis for standing only to those persons who are personally denied equal treatment.’” (quoting Allen v. Wright, 468 U.S. 737, 755 (1984)). Having failed to plead facts supporting a plausible injury in fact sufficient to confer standing on any plaintiff, the TAC is DISMISSED with prejudice>.

So holds the Northern District of California, August 20, 2021, Case 3:19-cv-05081-WHO, Vargas v. Facebook.

The court then adds that, even setting aside the above, Facebook would be protected by the § 230 CDA safe harbor, notwithstanding the well-known Roommates precedent of 2008, from which the case at hand differs:

<<Roommates is materially distinguishable from this case based on plaintiffs’ allegations in the TAC that the now-defunct Ad Targeting process was made available by Facebook for optional use by advertisers placing a host of different types of paid advertisements. Unlike in Roommates where use of the discriminatory criteria was mandated, here use of the tools was neither mandated nor inherently discriminatory given the design of the tools for use by a wide variety of advertisers.

In Dyroff, the Ninth Circuit concluded that tools created by the website creator there, “recommendations and notifications” the website sent to users based on the users’ inquiries that ultimately connected a drug dealer and a drug purchaser, did not turn the defendant who controlled the website into a content creator unshielded by CDA immunity. The panel confirmed that the tools were “meant to facilitate the communication and content of others. They are not content in and of themselves.” Dyroff, 934 F.3d 1093, 1098 (9th Cir. 2019), cert. denied, 140 S. Ct. 2761 (2020); see also Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1124 (9th Cir. 2003) (where website “questionnaire facilitated the expression of information by individual users” including proposing sexually suggestive phrases that could facilitate the development of libelous profiles, but left “selection of the content [] exclusively to the user,” and defendant was not “responsible, even in part, for associating certain multiple choice responses with a set of physical characteristics, a group of essay answers, and a photograph,” website operator was not information content provider falling outside Section 230’s immunity); Goddard v. Google, Inc., 640 F. Supp. 2d 1193, 1197 (N.D. Cal. 2009) (no liability based on Google’s use of “Keyword Tool,” that employs “an algorithm to suggest specific keywords to advertisers”).  

Here, the Ad Tools are neutral. It is the users “that ultimately determine what content to  post, such that the tool merely provides ‘a framework that could be utilized for proper or improper  purposes, . . . .’” Roommates, 521 F.3d at 1172 (analyzing Carafano). Therefore, even if the plaintiffs could allege facts supporting a plausible injury, their claims are barred by Section 230.>>

(News and link to the decision via Eric Goldman's blog)

Trump's lawsuit against the digital giants that excluded him from social media (more on social networks and the First Amendment)

Techdirt.com publishes Trump's complaint of July 7, 2021 against Facebook (Fb), which banned him in the preceding months. It is a class action.

The direct link is here.

The complaint is interesting; here I note only a few points on the long-standing question of the relationship between social networks and the First Amendment.

The introduction contains a summary of the entire pleading, pp. 1-4.

At p. 6 ff. there is a description of how Fb and social media work: of particular interest are the allegation of coordination between Fb and Twitter (§ 34) and the CENTRA platform for comprehensive user monitoring, i.e. covering their activity on other platforms as well (§ 36 ff.).

Parts III-IV-V contain the allegation of coordination (even coerced, sub III, § 56) between the federal government and the platforms. This sets up the central point that follows: Fb's conduct constitutes <State action> and it therefore cannot censor free speech:

<<In censoring the specific speech at issue in this lawsuit and deplatforming Plaintiff, Defendants were acting in concert with federal officials, including officials at the CDC and the Biden transition team. 151. As such, Defendants’ censorship activities amount to state action. 152. Defendants’ censoring the Plaintiff’s Facebook account, as well as those Putative Class Members, violates the First Amendment to the United States Constitution because it eliminates the Plaintiffs and Class Member’s participation in a public forum and the right to communicate to others their content and point of view. 153. Defendants’ censoring of the Plaintiff and Putative Class Members from their Facebook accounts violates the First Amendment because it imposes viewpoint and content-based restrictions on the Plaintiffs’ and Putative Class Members’ access to information, views, and content otherwise available to the general public. 154. Defendants’ censoring of the Plaintiff and Putative Class Members violates the First Amendment because it imposes a prior restraint on free speech and has a chilling effect on social media Users and non-Users alike. 155. Defendants’ blocking of the Individual and Class Plaintiffs from their Facebook accounts violates the First Amendment because it imposes a viewpoint and content-based restriction on the Plaintiff and Putative Class Members’ ability to petition the government for redress of grievances. 156. Defendants’ censorship of the Plaintiff and Putative Class Members from their Facebook accounts violates the First Amendment because it imposes a viewpoint and content-based restriction on their ability to speak and the public’s right to hear and respond. 157. Defendants’ blocking the Plaintiff and Putative Class Members from their Facebook accounts violates their First Amendment rights to free speech. 158. Defendants’ censoring of Plaintiff by banning Plaintiff from his Facebook account while exercising his free speech as President of the United States was an egregious violation of the First Amendment.>> (See § 159 ff. on Zuckerberg's personal role.)

It follows, the complaint argues, that the § 230 CDA safe harbor is unconstitutional:

<<167. Congress cannot lawfully induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.” Norwood v. Harrison, 413 US 455, 465 (1973). 168. Section 230(c)(2) is therefore unconstitutional on its face, and Section 230(c)(1) is likewise unconstitutional insofar as it has interpreted to immunize social media companies for action they take to censor constitutionally protected speech. 169. Section 230(c)(2) on its face, as well as Section 230(c)(1) when interpreted as described above, are also subject to heightened First Amendment scrutiny as content- and viewpoint-based regulations authorizing and encouraging large social media companies to censor constitutionally protected speech on the basis of its supposedly objectionable content and viewpoint. See Denver Area Educational Telecommunications Consortium, Inc. v. FCC, 518 U.S. 727 (1996). 170. Such heightened scrutiny cannot be satisfied here because Section 230 is not narrowly tailored, but rather a blank check issued to private companies holding unprecedented power over the content of public discourse to censor constitutionally protected speech with impunity, resulting in a grave threat to the freedom of expression and to democracy itself; because the word “objectionable” in Section 230 is so ill-defined, vague and capacious that it results in systematic viewpoint-based censorship of political speech, rather than merely the protection of children from obscene or sexually explicit speech as was its original intent; because Section 230 purports to immunize social media companies for censoring speech on the basis of viewpoint, not merely content; because Section 230 has turned a handful of private behemoth companies into “ministries of truth” and into the arbiters of what information and viewpoints can and cannot be uttered or heard by hundreds of millions of Americans; and because the legitimate interests behind Section 230 could have been served through far less speech-restrictive measures. 171. Accordingly, Plaintiff, on behalf of himself and the Class, seeks a declaration that Section 230(c)(1) and (c)(2) are unconstitutional insofar as they purport to immunize from liability social media companies and other Internet platforms for actions they take to censor constitutionally protected speech>>.

As announced, Trump has also filed similar actions against Twitter and against Google/YouTube and their respective CEOs (links provided by www.theverge.com).

Safe harbor for YouTube over the dissemination of an individual's image

The Dallas federal court, May 17, 2021, KANDANCE A. WELLS v. YouTube, civil action No. 3:20-CV-2849-S-BH, decides a damages claim (for $504,000.00) based on the unlawful dissemination (by third-party users) of the plaintiff's image, aimed at threatening her personally.

Several statutes were invoked as having been violated.

Inevitably, YouTube raises the § 230 CDA safe harbor defense, the only aspect examined here.

The court upholds the defense, and rightly so.

It examines the usual three elements and, as usual, the most interesting is the third (that the claim treat the defendant as a publisher or speaker): <<Plaintiff is suing Defendant for “violations to [her] personal safety as a general consumer” under the CPSA, the FTCA, and the “statutes preventing unfair competition, deceptive acts under tort law, and/or the deregulation of trade/trade practices” based on the allegedly derogatory image of her that is posted on Defendant’s website. (See doc. 3 at 1.) All her claims against Defendant treat it as the publisher of that image. See, e.g., Hinton, 72 F. Supp. 3d at 690 (quoting MySpace, 528 F.3d at 418) (“[T]he Court finds that all of the Plaintiff’s claims against eBay arise or ‘stem[ ] from the [ ] publication of information [on www.ebay.com] created by third parties….’”); Klayman, 753 F.3d at 1359 (“[I]ndeed, the very essence of publishing is making the decision whether to print or retract a given piece of content—the very actions for which Klayman seeks to hold Facebook liable.”). Accordingly, the third and final element is satisfied>>.

(News and link to the decision via Eric Goldman's blog)