Collecting personal information about others for later sale: right of publicity and the safe harbour under § 230 CDA

The District Court for the Northern District of California, 16 August 2021, No. 21-cv-01418-EMC, Cat Brooks et al. v. THOMSON REUTERS CORPORATION (hereinafter TR), decides the suit brought by the plaintiffs over the collection and subsequent sale to third parties of their personal data.

The information giant TR, a data broker, collected and sold information about individuals to interested businesses (through its CLEAR platform).

More precisely: Thomson Reuters “aggregates both public and nonpublic information about millions of people” to create “detailed cradle-to-grave dossiers on each person, including names, photographs, criminal history, relatives, associates, financial information, and employment information.” See Docket No. 11 (Compl.) ¶ 2. Besides publicly available information on social networks, blogs, and even chat rooms, Thomson Reuters also pulls “information from third-party data brokers and law enforcement agencies that are not available to the general public, including live cell phone records, location data from billions of license plate detections, real-time booking information from thousands of facilities, and millions of historical arrest records and intake photos.”

1) Among the various causae petendi, I consider the right of publicity.

The claim is dismissed not so much because there is no use (as TR alleged), but because there is no <Appropriation of Plaintiffs’ Name or Likeness For A Commercial Advantage>: Although the publishing of Plaintiffs’ most private and intimate information for profit might be a gross invasion of their privacy, it is not a misappropriation of their name or likeness to advertise or promote a separate product or service, p. 8.

2) The safe harbour under § 230 CDA, invoked by TR

Of the three necessary requirements (“(1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a state law cause of action, as a publisher or speaker (3) of information provided by another information content provider.”), TR failed to establish the second and the third.

As to the second, the case law teaches that <<a plaintiff seeks to treat an interactive computer service as a “publisher or speaker” under § 230(c)(1) only when it is asking that service to “review[], edit[], and decid[e] whether to publish or withdraw from publication third-party content.” Id. (quoting Barnes, 570 F.3d at 1102). Here, Plaintiffs are not seeking to hold Thomson Reuters liable “as the publisher or speaker” because they are not asking it to monitor third-party content; they are asking it to moderate its own content>>.

As to the third requirement, the information is provided not by third parties but by TR itself: the “information” at issue here – the dossiers with Plaintiffs’ personal information – is not “provided by another information content provider.” 47 U.S.C. § 230(c)(1). In Roommates.com, the panel explained that § 230 was passed by Congress to “immunize[] providers of interactive computer services against liability arising from content created by third parties.” 521 F.3d at 1162 (emphasis added). The whole point was to allow those providers to “perform some editing on user-generated content without thereby becoming liable for all defamatory or otherwise unlawful messages that they didn’t edit or delete. In other words, Congress sought to immunize the removal of user-generated content, not the creation of content.” Id. at 1163 (emphases added). Here, there is no user-generated content – Thomson Reuters generates all the dossiers with Plaintiffs’ personal information that is posted on the CLEAR platform. See Compl. ¶¶ 13. In other words, Thomson Reuters is the “information content provider” of the CLEAR dossiers because it is “responsible, in whole or in part, for the creation or development of” those dossiers. 47 U.S.C. § 230(f)(3). It is nothing like the paradigm of an interactive computer service that permits posting of content by third parties.

Discrimination in housing searches via Facebook: proof is lacking

A claim alleging violation of the Fair Housing Act and analogous state statutes (no results – or an unjustified difference in results compared with another user of a different ethnicity – from searches, allegedly because they were run from accounts of so-called Latino users) is dismissed for lack of proof.

In Italy see above all legislative decree 9 July 2003 no. 216 and the legislative decree of the same date, no. 215 (the reference author on the topic is Prof. Daniele Maffeis, in many writings including this one).

In the Anglo-American world, above all in the United States, there is an enormous body of writing on the topic: see e.g. Rebecca Kelly Slaughter, Janice Kopec & Mohamad Batal, Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission, Yale Journal of Law & Technology.

The judge writes:

<In sum, what the plaintiffs have alleged is that they each used Facebook to search for housing based on identified criteria and that no results were returned that met their criteria. They assume (but plead no facts to support) that no results were returned because unidentified advertisers theoretically used Facebook’s Targeting Ad tools to exclude them based on their protected class statuses from seeing paid Ads for housing that they assume (again, with no facts alleged in support) were available and would have otherwise met their criteria. Plaintiffs’ claim that Facebook denied them access to unidentified Ads is the sort of generalized grievance that is insufficient to support standing. See, e.g., Carroll v. Nakatani, 342 F.3d 934, 940 (9th Cir. 2003) (“The Supreme Court has repeatedly refused to recognize a generalized grievance against allegedly illegal government conduct as sufficient to confer standing” and when “a government actor discriminates on the basis of race, the resulting injury ‘accords a basis for standing only to those persons who are personally denied equal treatment.’” (quoting Allen v. Wright, 468 U.S. 737, 755 (1984)). Having failed to plead facts supporting a plausible injury in fact sufficient to confer standing on any plaintiff, the TAC is DISMISSED with prejudice>.

So holds the Northern District of California, 20 August 2021, Case 3:19-cv-05081-WHO, Vargas v. Facebook.

The court then adds that, even setting aside the above, Facebook would be protected by the safe harbour under § 230 CDA, notwithstanding the well-known Roommates precedent of 2008, from which the case at hand differs:

<<Roommates is materially distinguishable from this case based on plaintiffs’ allegations in the TAC that the now-defunct Ad Targeting process was made available by Facebook for optional use by advertisers placing a host of different types of paid advertisements. Unlike in Roommates where use of the discriminatory criteria was mandated, here use of the tools was neither mandated nor inherently discriminatory given the design of the tools for use by a wide variety of advertisers.

In Dyroff, the Ninth Circuit concluded that tools created by the website creator there – “recommendations and notifications” the website sent to users based on the users’ inquiries, which ultimately connected a drug dealer and a drug purchaser – did not turn the defendant who controlled the website into a content creator unshielded by CDA immunity. The panel confirmed that the tools were “meant to facilitate the communication and content of others. They are not content in and of themselves.” Dyroff, 934 F.3d 1093, 1098 (9th Cir. 2019), cert. denied, 140 S. Ct. 2761 (2020); see also Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1124 (9th Cir. 2003) (where website “questionnaire facilitated the expression of information by individual users” including proposing sexually suggestive phrases that could facilitate the development of libelous profiles, but left “selection of the content [] exclusively to the user,” and defendant was not “responsible, even in part, for associating certain multiple choice responses with a set of physical characteristics, a group of essay answers, and a photograph,” website operator was not information content provider falling outside Section 230’s immunity); Goddard v. Google, Inc., 640 F. Supp. 2d 1193, 1197 (N.D. Cal. 2009) (no liability based on Google’s use of “Keyword Tool,” that employs “an algorithm to suggest specific keywords to advertisers”).

Here, the Ad Tools are neutral. It is the users “that ultimately determine what content to post, such that the tool merely provides ‘a framework that could be utilized for proper or improper purposes, . . . .’” Roommates, 521 F.3d at 1172 (analyzing Carafano). Therefore, even if the plaintiffs could allege facts supporting a plausible injury, their claims are barred by Section 230.>>

(news and link to the decision from Eric Goldman’s blog)

More on online yearbooks that use former students’ personal data

In Knapke v. Peopleconnect Inc., 10 August 2021, a Washington court decides a dispute over the right of publicity improperly exploited by the yearbook service Classmates (C.) (here, by featuring the plaintiff’s name and image in advertisements).

C. publishes school and university yearbooks, in part free of charge (but with advertising) and in part for a fee.

C. defends itself strenuously, but the court denies the motion to dismiss.

The safe harbour defence under § 230 CDA is rejected, since the material is C.’s own rather than third parties’.

See also C.’s detailed defences.

The most interesting is based on the First Amendment: <<Classmates argues that “where a person’s name, image, or likeness is used in speech for ‘informative or cultural’ purposes, the First Amendment renders the use ‘immune’ from liability.”>> (sub F).

The court, however, rejects it.

Months ago I had already reported another yearbook case, CALLAHAN v. ANCESTRY.COM INC.

(news and links from Eric Goldman’s blog)

Trump’s court action against the digital giants that excluded him from social media (more on social networks and the First Amendment)

Techdirt.com publishes Trump’s complaint of 7 July 2021 against Facebook (Fb), which banned him in the preceding months. It is a class action.

The direct link is here.

The complaint is interesting; here I recall only a few points on the long-standing question of the relationship between social networks and the First Amendment.

The introduction contains a summary of the entire pleading, pp. 1-4.

At p. 6 ff. there is a description of how Fb and social networks operate: of particular interest are the allegation of coordination between Fb and Twitter, § 34, and the CENTRA platform for comprehensive user monitoring, i.e. covering their activity on other platforms as well, § 36 ff.

Parts III-IV-V contain the allegation of coordination (including coerced coordination, sub III, § 56) between the federal government and the platforms. This lays the groundwork for the central point that follows: Fb’s conduct constitutes <State action> and therefore it may not censor free speech:

<<In censoring the specific speech at issue in this lawsuit and deplatforming Plaintiff, Defendants were acting in concert with federal officials, including officials at the CDC and the Biden transition team. 151. As such, Defendants’ censorship activities amount to state action. 152. Defendants’ censoring the Plaintiff’s Facebook account, as well as those Putative Class Members, violates the First Amendment to the United States Constitution because it eliminates the Plaintiffs and Class Member’s participation in a public forum and the right to communicate to others their content and point of view. 153. Defendants’ censoring of the Plaintiff and Putative Class Members from their Facebook accounts violates the First Amendment because it imposes viewpoint and content-based restrictions on the Plaintiffs’ and Putative Class Members’ access to information, views, and content otherwise available to the general public. 154. Defendants’ censoring of the Plaintiff and Putative Class Members violates the First Amendment because it imposes a prior restraint on free speech and has a chilling effect on social media Users and non-Users alike. 155. Defendants’ blocking of the Individual and Class Plaintiffs from their Facebook accounts violates the First Amendment because it imposes a viewpoint and content-based restriction on the Plaintiff and Putative Class Members’ ability to petition the government for redress of grievances. 156. Defendants’ censorship of the Plaintiff and Putative Class Members from their Facebook accounts violates the First Amendment because it imposes a viewpoint and content-based restriction on their ability to speak and the public’s right to hear and respond. 157. Defendants’ blocking the Plaintiff and Putative Class Members from their Facebook accounts violates their First Amendment rights to free speech. 158. Defendants’ censoring of Plaintiff by banning Plaintiff from his Facebook account while exercising his free speech as President of the United States was an egregious violation of the First Amendment.>> (at § 159 ff. on Zuckerberg’s personal role).

It follows, on this theory, that the safe harbour under § 230 CDA is unconstitutional:

<<167. “Congress cannot lawfully induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.” Norwood v. Harrison, 413 US 455, 465 (1973). 168. Section 230(c)(2) is therefore unconstitutional on its face, and Section 230(c)(1) is likewise unconstitutional insofar as it has been interpreted to immunize social media companies for action they take to censor constitutionally protected speech. 169. Section 230(c)(2) on its face, as well as Section 230(c)(1) when interpreted as described above, are also subject to heightened First Amendment scrutiny as content- and viewpoint-based regulations authorizing and encouraging large social media companies to censor constitutionally protected speech on the basis of its supposedly objectionable content and viewpoint. See Denver Area Educational Telecommunications Consortium, Inc. v. FCC, 518 U.S. 727 (1996). 170. Such heightened scrutiny cannot be satisfied here because Section 230 is not narrowly tailored, but rather a blank check issued to private companies holding unprecedented power over the content of public discourse to censor constitutionally protected speech with impunity, resulting in a grave threat to the freedom of expression and to democracy itself; because the word “objectionable” in Section 230 is so ill-defined, vague and capacious that it results in systematic viewpoint-based censorship of political speech, rather than merely the protection of children from obscene or sexually explicit speech as was its original intent; because Section 230 purports to immunize social media companies for censoring speech on the basis of viewpoint, not merely content; because Section 230 has turned a handful of private behemoth companies into “ministries of truth” and into the arbiters of what information and viewpoints can and cannot be uttered or heard by hundreds of millions of Americans; and because the legitimate interests behind Section 230 could have been served through far less speech-restrictive measures. 171. Accordingly, Plaintiff, on behalf of himself and the Class, seeks a declaration that Section 230(c)(1) and (c)(2) are unconstitutional insofar as they purport to immunize from liability social media companies and other Internet platforms for actions they take to censor constitutionally protected speech>>.

As announced, he has also launched similar actions against Twitter and against Google/YouTube and their respective CEOs (links available at www.theverge.com).

Safe harbour for YouTube over the dissemination of images of a natural person

The Dallas court, 17 May 2021, KANDANCE A. WELLS v. YouTube, civil action No. 3:20-CV-2849-S-BH, decides a claim for damages ($504,000.00) based on the unlawful dissemination (by third-party users) of the plaintiff’s image, aimed at threatening her personally.

Several statutes were alleged to have been violated.

Inevitably, Y. raises the safe harbour under § 230 CDA, the only aspect examined here.

The court upholds the defence, and rightly so.

The court examines the usual three requirements; as usual, the most interesting is the third (that the claim treats the defendant as a publisher or speaker): <<Plaintiff is suing Defendant for “violations to [her] personal safety as a general consumer” under the CPSA, the FTCA, and the “statutes preventing unfair competition, deceptive acts under tort law, and/or the deregulation of trade/trade practices” based on the allegedly derogatory image of her that is posted on Defendant’s website. (See doc. 3 at 1.) All her claims against Defendant treat it as the publisher of that image. See, e.g., Hinton, 72 F. Supp. 3d at 690 (quoting MySpace, 528 F.3d at 418) (“[T]he Court finds that all of the Plaintiff’s claims against eBay arise or ‘stem[ ] from the [ ] publication of information [on www.ebay.com] created by third parties….’”); Klayman, 753 F.3d at 1359 (“[I]ndeed, the very essence of publishing is making the decision whether to print or retract a given piece of content—the very actions for which Klayman seeks to hold Facebook liable.”). Accordingly, the third and final element is satisfied>>.

(news and link to the decision from Eric Goldman’s blog)

Information database and the exemption under § 230 CDA

Can a database hosting information of various kinds about individuals, available to third parties for a fee, rely on § 230 CDA where the information is inaccurate (non-existent or erroneous criminal records) and violates the Fair Credit Reporting Act (FCRA)?

The site at issue is publicdata.com.

The question arises because the site collects and supplies information provided to it by third-party entities: or rather, it trades in that information, since it buys it and then resells it (above all to employers), sometimes structured into easy-to-read reports, p. 2.

The answer is yes, according to the district court for the Eastern District of Virginia, 19 May 2021, case no. 3:20-cv-294-HEH, Henderson v. The Source for Public Data: such a trader in information may shelter behind the safe harbour shield, even though it plays a role of its own in selecting and organizing the information put online.

See § III.B (where the court notes that the case is novel in US case law, p. 7).

In detail, § 230 CDA applies to this type of violation (pp. 8-10).

And the requirements laid down by that provision are met:

  1. it is an internet service provider, pp. 11-12;
  2. it is not a content provider, because the information comes from third parties, p. 13; mere selection does not matter, since filtering falls within the activities of an access provider (§ 230(f)(4)(A));
  3. the claim asserted treats the defendant as a content creator (i.e. it invokes the defendant’s editorial liability, pp. 13-14).

The delicate point is the second, concerning the role played in organizing the materials. But the cited provision does indeed outline a very broad concept of <access provider>.

(Unfortunately, the copy of the decision found is an image-only PDF.)

More on Snap’s (dangerous) “speed filter” and the safe harbour under § 230 CDA

Another dispute over the fatal consequences of using Snap Inc.’s <speed filter> software, which lets users record and share videos from a vehicle showing the speed being travelled.

The parents of young men who died in a car accident, caused in part by Snap’s encouragement to drive fast, sue Snap Inc. (hereinafter S.) for defective product design (product liability).

The Ninth Circuit Court of Appeals decides, 4 May 2021, Lemmon et al. v. Snap Inc., No. 20-55295, D.C. No. 2:19-cv-04504-MWF-KS.

At first instance the safe harbour defence under § 230 CDA had been upheld; on appeal it is rejected, so the first-instance decision is reversed.

In short, for the court, the parents’ claim was based on negligent product design and therefore had nothing to do with the publisher/speaker role required by § 230 CDA.

It should also be borne in mind that <<many of Snapchat’s users suspect, if not actually “believe,” that Snapchat will reward them for “recording a 100-MPH or faster [s]nap” using the Speed Filter. According to plaintiffs, “[t]his is a game for Snap and many of its users” with the goal being to reach 100 MPH, take a photo or video with the Speed Filter, “and then share the 100-MPH-Snap on Snapchat.”>>, p. 7.

Hence <<Snapchat allegedly knew or should have known, before May 28, 2017, that its users believed that such a reward system existed and that the Speed Filter was therefore incentivizing young drivers to drive at dangerous speeds. Indeed, the Parents allege that there had been: a series of news articles about this phenomenon; an online petition that “called on Snapchat to address its role in encouraging dangerous speeding”; at least three accidents linked to Snapchat users’ pursuit of high-speed snaps; and at least one other lawsuit against Snap based on these practices. While Snapchat warned its users against using the Speed Filter while driving, these warnings allegedly proved ineffective. And, despite all this, “Snap did not remove or restrict access to Snapchat while traveling at dangerous speeds or otherwise properly address the danger it created.”>>, ibid.

<<Here, the Parents seek to hold Snap liable for its allegedly “unreasonable and negligent” design decisions regarding Snapchat. They allege that Snap created: (1) Snapchat; (2) Snapchat’s Speed Filter; and (3) an incentive system within Snapchat that encouraged its users to pursue certain unknown achievements and rewards. The Speed Filter and the incentive system then supposedly worked in tandem to entice young Snapchat users to drive at speeds exceeding 100 MPH.
The Parents thus allege a cause of action for negligent design—a common products liability tort>>, p. 11.

Non si tratta quindi di causa petendi basata sull’attività di publisher/speaker: <<The duty underlying such a claim differs markedly from the duties of publishers as defined in the CDA. Manufacturers have a specific duty to refrain from designing a product that poses an unreasonable risk of injury or harm to consumers. See Dan B. Dobbs et al., Dobbs’ Law of Torts § 478 (2d ed., June 2020 Update). Meanwhile, entities acting solely as publishers—i.e., those that “review[] material submitted for publication, perhaps edit[] it for style or technical fluency, and then decide[] whether to publish it,” Barnes, 570 F.3d at 1102—generally have no similar duty. See Dobbs’ Law of Torts § 478.
It is thus apparent that the Parents’ amended complaint does not seek to hold Snap liable for its conduct as a publisher or speaker. Their negligent design lawsuit treats Snap as a products manufacturer, accusing it of negligently designing a product (Snapchat) with a defect (the interplay between Snapchat’s reward system and the Speed Filter). Thus, the duty that Snap allegedly violated “springs from” its distinct capacity as a product designer. Barnes, 570 F.3d at 1107. This is further evidenced by the fact that Snap could have satisfied its “alleged obligation”—to take reasonable measures to design a product more useful than it was foreseeably dangerous—without altering the content that Snapchat’s users generate. Internet Brands, 824 F.3d at 851. Snap’s alleged duty in this case thus “has nothing to do with” its editing, monitoring, or removing of the content that its users generate through Snapchat. Id. at 852>>, 12-13.

Yet there was hosting of third-party material: the videos (snaps) of the three unfortunate young men (who are third parties vis-à-vis S.).

But this does not change the fact that the causa petendi was defective product design: <<Notably, the Parents do not fault Snap in the least for publishing Landen’s snap. Indeed, their amended complaint fully disclaims such a reading of their claim: “The danger is not the Snap [message using the Speed Filter] itself. Obviously, no one is harmed by the post. Rather, the danger is the speeding.” AC ¶ 14. While we need not accept conclusory allegations contained in a complaint, we must nonetheless read the complaint in the light most favorable to the Parents. See Dyroff, 934 F.3d at 1096. And this statement reinforces our own reading of the Parents’ negligent design claim as standing independently of the content that Snapchat’s users create with the Speed Filter.
To sum up, even if Snap is acting as a publisher in releasing Snapchat and its various features to the public, the Parents’ claim still rests on nothing more than Snap’s “own acts.” Roommates, 521 F.3d 1165. The Parents’ claim thus is not predicated on “information provided by another information content provider.” Barnes, 570 F.3d at 1101>>, p. 15.

The decision seems correct.

News and link to the decision from Eric Goldman’s blog (which, by contrast, is critical of the decision).

Safe harbour under § 230 CDA and a car-rental intermediation platform

Logan Airport in Boston, Massachusetts, USA (hereinafter A.) does not allow car rental services unless agreed with it.

The Turo platform (T.) offers a service matching supply and demand for car rental: <<Turo describes itself as “an online platform that operates a peer-to-peer marketplace connecting [hosts] with [guests] seeking cars on a short-term basis.” Turo has no office, rental counter, or other physical presence at Logan Airport. A guest seeking to rent a motor vehicle from a host would search Turo’s website or available listings, select and book a particular vehicle, and then coordinate the pick-up location and time with the host. Turo does not require its hosts to deliver vehicles to their guests, nor does Turo determine the parties’ particular rendezvous location>>, p. 4.

A. sanctions T. for violating the prohibition on providing car rental services absent an agreement (sought by A. but refused by T.).

A. then sues T. for an injunction against the service and for damages. Naturally, T. raises the safe harbour under § 230 CDA.

The Massachusetts Supreme Judicial Court, by decision of 21 April 2021, MASSACHUSETTS PORT AUTHORITY vs. TURO INC. & others, confirms that the safe harbour does not apply to T., essentially because T. is not a mere host of third-party data but a “facilitator”: <<The judge determined, and we agree, that Turo’s immunity claims fail as to the second prong because Massport’s claims against Turo regard the portion of the content on Turo’s website advertising Logan Airport as a desirable pick-up or drop-off location, which was created by Turo itself.>>, p. 11.

As to the information provided by T.: <<encouraging the use of Logan Airport as a desirable pick-up or drop-off location for its users is exactly the content Massport asserts is the basis for the claim of aiding and abetting. Cf. Federal Trade Comm’n v. Accusearch, Inc., 570 F.3d 1187, 1199 (10th Cir. 2009) (information service provider liable for “development of offensive content only if it in some way specifically encourages development of what is offensive about the content”). Because this specific content was created by Turo, it cannot be construed reasonably as “information provided by another,” Backpage.com, 817 F.3d at 19, and Turo is not protected by § 230’s shield of immunity on the basis of this prong. As to the third prong, the judge ruled that immunity under § 230 is not available to Turo because, rather than seeking to hold Turo liable as the publisher or speaker for its users’ content, Massport’s claims sought to hold Turo liable for its own role in facilitating the online car rental transactions that resulted in its customers’ continuing trespass. The record supports the judge’s conclusion.>>, p. 12.

The court then cites a 2019 precedent from a district court in its state involving Airbnb.

In the case at hand, says the SJC, <<as in the Airbnb case, the record reflects that Turo serves a dual role as both the publisher of its users’ third-party listings and the facilitator of the rental transactions themselves, and in particular the rental transactions that occur on Massport’s Logan Airport property. Rather than focusing on what Turo allows its hosts to publish in their listings, Massport’s claims pointedly focus on Turo’s role as the facilitator of the ensuing rental transactions at Logan Airport, which is far more than just offering a website to serve as a go-between among those seeking to rent their vehicles and those seeking rental vehicles>>, p. 14.

The facts here come close to the European «The Pirate Bay» case decided by the Court of Justice on 14 June 2017, C‑610/15 (albeit concerning communication to the public in copyright law).

(news and link to the decision from Eric Goldman’s blog)

More on the safe harbour under § 230 CDA and Twitter

A model (M.) discovers some intimate photos of herself published on Twitter (T.) by a publisher (E.) operating in that sector.

She therefore asks T. to remove the photos and the tweets and to suspend the account.

T. satisfies her only on the first point.

M. then sues T. and E., asserting: <<(1) copyright infringement; (2) a violation of FOSTA-SESTA, 18 U.S.C. 1598 (named for the Allow States and Victims to Fight Online Sex Trafficking Act and Stop Online Sex Trafficking Act bills); (3) a violation of the right of publicity under Cal. Civ. Code § 3344; (4) false advertising under the Lanham Act; (5) false light invasion of privacy; (6) defamation, a violation under Cal. Civ. Code § 44, et seq.; (7) fraud in violation of California’s Unfair Competition Law, Cal. Bus. & Prof. Code § 17200 et seq.; (8) negligent and intentional infliction of emotional distress; and (9) unjust enrichment>>.

The US District Court for the Central District of California decides, 19 February 2021, case CV 20-10434-GW-JEMx, Morton v. Twitter et al.

Needless to say, T. raises the § 230 CDA defence against all claims except the copyright claim.

The requirement of whether the plaintiff treats the defendant as a publisher or speaker is always problematic: substance matters, not the label the plaintiff uses. That is, the claim must be characterized by the judge, p. 5.

M. tries to argue that E. is not a third party but an affiliate of T. The court rejects this, effectively without reasoning, pp. 5-6. All the more so since it would have been more appropriate to address the point under the requirement of whether the material comes from a “third party”, rather than under whether the defendant is treated as a publisher.

The most interesting point is the coverage of the contract claim by § 230, pp. 7 ff.

M. argues it is not covered: in vain, because the court dismisses on safe harbour grounds, for two reasons, pp. 7-8:

First, because M. did not identify a contractual clause obliging T. to suspend offending accounts: the clause exists, but it is merely aspirational, not binding.

Second, because the request to suspend the account involves an editorial decision, so the defence applies: <<“But removing content is something publishers do, and to impose liability on the basis of such conduct necessarily involves treating the liable party as a publisher of the content it failed to remove.” Barnes, 570 F.3d at 1103 (holding that Section 230 barred a negligent-undertaking claim because “the duty that Barnes claims Yahoo violated derives from Yahoo’s conduct as a publisher – the steps it allegedly took, but later supposedly abandoned, to de-publish the offensive profiles”)>>, p. 8.

This is theoretically the most interesting point: the challenged conduct constitutes at once both contractual (non-)performance and an editorial decision. The two characterizations overlap.

By contrast, reliance-based liability (promissory estoppel) is not precluded by the defence, so only here does M. prevail: <<This is because liability for promissory estoppel is not necessarily for behavior that is identical to publishing or speaking (e.g., publishing defamatory material in the form of SpyIRL’s tweets or failing to remove those tweets and suspend the account). “[P]romising . . . is not synonymous with the performance of the action promised. . . . one can, and often does, promise to do something without actually doing it at the same time.” Barnes, 570 F.3d at 1107. On this theory, “contract liability would come not from [Twitter]’s publishing conduct, but from [Twitter]’s manifest intention to be legally obligated to do something, which happens to be removal of material from publication.” Id. That manifested intention “generates a legal duty distinct from the conduct at hand, be it the conduct of a publisher, of a doctor, or of an overzealous uncle.” Id.>>

(decisions and links from Eric Goldman’s blog)