Is someone who retweets a defamatory post covered by the safe harbour under § 230 CDA? Apparently so

The Supreme Court of New Hampshire, in an opinion of May 11, 2022, Hillsborough-northern judicial district No. 2020-0496, Banaian v. Bascom et al., addresses the issue and answers in the affirmative.

At a school north of Boston, a student had hacked the school’s website and inserted offensive posts suggesting that a teacher was “sexually pe[r]verted and desirous of seeking sexual liaisons with Merrimack Valley students and their parents.”

Another student tweets the post, and others then retweet that first tweet.

The teacher sues the retweeters, who however raise the safe harbour defense under § 230(c) CDA. The provision reads:

<<c) Protection for “Good Samaritan” blocking and screening of offensive material.

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.>>.

The legal question is whether the students in the case at hand fall within the concept of “user.”

The Supreme Court confirms that they do. Indeed, it would be quite difficult to reason otherwise.

Precisamente: << We are persuaded by the reasoning set forth in these cases. The plaintiff identifies no case law that supports a contrary result. Rather, the plaintiff argues that because the text of the statute is ambiguous, the title of section 230(c) — “Protection for ‘Good Samaritan’ blocking and screening of offensive material” — should be used to resolve the ambiguity. We disagree, however, that the term “user” in the text of section 230 is ambiguous. See Webster’s Third New International Dictionary 2524 (unabridged ed. 2002) (defining “user” to mean “one that uses”); American Heritage Dictionary of the English Language 1908 (5th ed. 2011) (defining “user” to mean “[o]ne who uses a computer, computer program, or online service”). “[H]eadings and titles are not meant to take the place of the detailed provisions of the text”; hence, “the wise rule that the title of a statute and the heading of a section cannot limit the plain meaning of the text.” Brotherhood of R.R. Trainmen v. Baltimore & O.R. Co., 331 U.S. 519, 528-29 (1947). Likewise, to the extent the plaintiff asserts that the legislative history of section 230 compels the conclusion that Congress did not intend “users” to refer to individual users, we do not consider legislative history to construe a statute which is clear on its face. See Adkins v. Silverman, 899 F.3d 395, 403 (5th Cir. 2018) (explaining that “where a statute’s text is clear, courts should not resort to legislative history”).

Despite the plaintiff’s assertion to the contrary, we conclude that it is evident that section 230 of the CDA abrogates the common law of defamation as applied to individual users. The CDA provides that “[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” 47 U.S.C. § 230(e)(3). We agree with the trial court that the statute’s plain language confers immunity from suit upon users and that “Congress chose to immunize all users who repost[] the content of others.” That individual users are immunized from claims of defamation for retweeting content that they did not create is evident from the statutory language. See Zeran v. America Online, Inc., 129 F.3d 327, 334 (4th Cir. 1997) (explaining that the language of section 230 makes “plain that Congress’ desire to promote unfettered speech on the Internet must supersede conflicting common law causes of action”).
We hold that the retweeter defendants are “user[s] of an interactive computer service” under section 230(c)(1) of the CDA, and thus the plaintiff’s claims against them are barred. See 47 U.S.C. § 230(e)(3). Accordingly, we uphold the trial court’s granting of the motions to dismiss because the facts pled in the plaintiff’s complaint do not constitute a basis for legal relief.
>>

(News of, and link to, the decision from Prof. Eric Goldman’s blog)

More on free speech vs. Twitter: no First Amendment violation, since Twitter is not a state actor (on Trump v. Twitter)

Another decision in the Trump et al. v. Twitter litigation (N.D. Cal., May 6, 2022, case 3:21-cv-08378-JD), arising from Twitter’s well-known censorship of the former president.

Here too, things go badly for the former president: Twitter is in no way a state actor, so he cannot invoke the First Amendment right to free speech.

Note the court’s initial understatement: <<Plaintiffs are not starting from a position of strength. Twitter is a private company, and “the First Amendment applies only to governmental abridgements of speech, and not to alleged abridgements by private companies”>>.

<<Plaintiffs’ only hope of stating a First Amendment claim is to plausibly allege that Twitter was in effect operating as the government under the “state-action doctrine.” This doctrine provides that, in some situations, “governmental authority may dominate an activity to such an extent that its participants must be deemed to act with the authority of the government and, as a result, be subject to constitutional constraints>>.

<< The salient question under the state action doctrine is whether “the conduct allegedly causing the deprivation of a federal right” is “fairly attributable to the State.” >>

Consider that, regarding proof of state action in this specific case, <<in plaintiffs’ view, these account actions were the result of coercion by members of Congress affiliated with the Democratic Party>>!!

The request for review of the constitutionality of § 230 CDA is also rejected, because the injury required for that purpose is lacking.

Blocking a Twitter account over deceptive or misleading posts is covered by the safe harbour under § 230 CDA

The Northern District of California, by order of April 29, 2022, No. C 21-09818 WHA, Berenson v. Twitter, rules on a claim alleging an unlawful account suspension over misleading posts, following Twitter’s new five-strike COVID-19 policy.

And it rejects the claim, recognizing the safe harbour under § 230(c)(2)(A) CDA.

The plaintiff’s allegations about Twitter’s lack of good faith are of no avail: << With the exception of the claims for breach of contract and promissory estoppel, all claims in this action are barred by 47 U.S.C. Section 230(c)(2)(A), which provides, “No provider or user of an interactive computer service shall be held liable on account of — any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” For an internet platform like Twitter, Section 230 precludes liability for removing content and preventing content from being posted that the platform finds would cause its users harm, such as misinformation regarding COVID-19. Plaintiff’s allegations regarding the leadup to his account suspension do not provide a sufficient factual underpinning for his conclusion Twitter lacked good faith. Twitter constructed a robust five-strike COVID-19 misinformation policy and, even if it applied those strikes in error, that alone would not show bad faith. Rather, the allegations are consistent with Twitter’s good faith effort to respond to clearly objectionable content posted by users on its platform. See Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1105 (9th Cir. 2009); Domen v. Vimeo, Inc., 433 F. Supp. 3d 592, 604 (S.D.N.Y. 2020) (Judge Stewart D. Aaron)>>.

By contrast, the claims based on breach of contract and promissory estoppel do not fall within that defense (so the case proceeds on those).

The free-speech claim is likewise rejected for the usual reason of lack of state action, Twitter being a private entity: <<Aside from Section 230, plaintiff fails to even state a First Amendment claim. The free speech clause only prohibits government abridgement of speech — plaintiff concedes Twitter is a private company (Compl. ¶15). Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1928 (2019). Twitter’s actions here, moreover, do not constitute state action under the joint action test because the combination of (1) the shift in Twitter’s enforcement position, and (2) general cajoling from various federal officials regarding misinformation on social media platforms do not plausibly assert Twitter conspired or was otherwise a willful participant in government action. See Heineke v. Santa Clara Univ., 965 F.3d 1009, 1014 (9th Cir. 2020). For the same reasons, plaintiff has not alleged state action under the governmental nexus test either, which is generally subsumed by the joint action test. Naoko Ohno v. Yuko Yasuma, 723 F.3d 984, 995 n.13 (9th Cir. 2013). Twitter “may be a paradigmatic public square on the Internet, but it is not transformed into a state actor solely by providing a forum for speech.” Prager Univ. v. Google LLC, 951 F.3d 991, 997 (9th Cir. 2020) (cleaned up, quotation omitted).>>

(News and link to the decision from Prof. Eric Goldman’s blog)

Retweeting while adding defamatory comments is not protected by the safe harbour under § 230 CDA

Byrne is sued for defamation by US Dominion (a US company supplying software for managing election processes) over offensive statements and tweets.

He invokes the § 230 CDA safe harbour defense, but without success: he is in fact a content provider.

Merely tweeting a link (to defamatory material) might be covered by it, but not the accompanying comments.

So holds the District Court for the District of Columbia, April 20, 2022, Case 1:21-cv-02131-CJN, US Dominion v. Byrne: <<A so-called “information content provider” does not enjoy immunity under § 230. Klayman v. Zuckerberg, 753 F.3d 1354, 1356 (D.C. Cir. 2014). Any “person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service” qualifies as an “information content provider.” 47 U.S.C. § 230(f)(3); Bennett, 882 F.3d at 1166 (noting a dividing line between service and content in that “‘interactive computer service’ providers—which are generally eligible for CDA section 230 immunity—and ‘information content provider[s],’” are not entitled to immunity).

While § 230 may provide immunity for someone who merely shares a link on Twitter, Roca Labs, Inc. v. Consumer Opinion Corp., 140 F. Supp. 3d 1311, 1321 (M.D. Fla. 2015), it does not immunize someone for making additional remarks that are allegedly defamatory, see La Liberte v. Reid, 966 F.3d 79, 89 (2d Cir. 2020). Here, Byrne stated that he “vouch[ed] for” the evidence proving that Dominion had a connection to China. See Compl. ¶ 153(m). Byrne’s alleged statements accompanying the retweet therefore fall outside the ambit of § 230 immunity>>.

Not a difficult question: whether merely posting a link is protected is an interesting issue; that defamatory accompanying comments make their author a content provider, by contrast, is certain.

Is breach of contract covered by the editorial safe harbour under § 230 CDA?

The question is touched on by the New York Appellate Division, March 22, 2022, 2022 NY Slip Op 01978, Word of God Fellowship, Inc. v Vimeo, Inc., where the plaintiff sues Vimeo after its videos were removed as misleading on vaccine safety.

In my view, this important question calls for a negative answer: the platform cannot invoke the safe harbour if it breaches a contractual rule it freely took on.

It is different where, as in the case at hand, the hosting contract provides for a power of removal: but then the right of removal is grounded in the contract, not in the safe harbour defense.

(News of the decision and link from Prof. Eric Goldman’s blog)

Warning obligations for products sold on its marketplace bind Amazon itself (which cannot even invoke the safe harbour under § 230 CDA)

Under a 1986 California law (the so-called <<Proposition 65>>): <<No person in the course of doing business shall knowingly and intentionally expose any individual to a chemical known to the state to cause cancer or reproductive toxicity without first giving clear and reasonable warning to such individual, except as provided in Section 25249.10>>.

The point is: does Amazon fall within the concept of <<person in the course of doing business>> subject to that warning duty? Yes, according to the California Court of Appeal, First Appellate District, Lee v. Amazon, A158275, in a case concerning a skin-lightening cream containing excess mercury.

Amazon naturally argues that it is not part of the distribution chain, but the defense is rejected: <<The trial court was clearly correct to reject Amazon’s claim to be outside the chain of distribution. Proposition 65 imposes the duty to provide warnings on any “person in the course of doing business,” which unquestionably includes Amazon’s activities here. As the trial court explained, “there is no language in section 25249.11(f) [‘definitions’ for Proposition 65] or the new regulations expressly limiting the duty to provide a Proposition 65 warning only to a ‘manufacturer, producer, packager, importer, supplier, or distributor of a product,’ or a ‘retail seller’ (under more limited circumstances described in C.C.R. § 25600.2(e)), or limiting the broad language in the operative statute imposing the warning requirement on any ‘person in the course of doing business’ who ‘knowingly and intentionally expose[s] any individual’ to a listed chemical. (Health & Saf. Code § 25249.6.) The phrase ‘person in the course of doing business’ is broadly worded and not limited to parties in the chain of distribution of a product or whose status is defined in the regulations. (See Health & Saf. Code, § 25249.11(b).)” Amazon manages and oversees all aspects of third-party sales on its Web site, including accepting payment and providing refunds to customers on sellers’ behalf, providing the only channel for communication between customers and sellers, earning fees from sellers for each completed sale and, for sellers utilizing the FBA program, storing the products and arranging for their delivery to customers. There can be no question Amazon was, in the words of one court, “pivotal in bringing the product here to the consumer.” (Bolger v. Amazon.com (2020) 53 Cal.App.5th 431, 438 (Bolger).)>>, pp. 36-37.

Moreover, Amazon is not entitled to invoke the § 230 CDA publisher defense (the first-instance judge held the opposite).

The question is not a simple one.

The plaintiff Lee argues that <<Amazon violated Proposition 65 exposing consumers to mercury without warnings through its own conduct. The claims do not attempt to hold Amazon responsible for third-party sellers’ content (except in the sense that Amazon would have been able to disclaim responsibility for providing warnings if the sellers had provided them). As we have discussed, the claims do not require Amazon to modify or remove third-party content but rather to provide a warning where Amazon’s own conduct makes it subject to Health and Safety Code section 25249.6>>, p. 76.

And further: << Contrary to Amazon’s characterization, enforcing its obligations under Proposition 65 does not require it to “monitor, review, and revise” product listings. As both Lee and the Attorney General point out, the “knowingly and intentionally” requirement in Health and Safety Code section 25249.6 means Amazon is required to provide a warning where it has knowledge a product contains a listed chemical—for example, from public health alerts or direct notice. We recognize that any responsibility to provide warnings Amazon might have under section 25249.6 would not result in liability if the third-party seller of a skin-lightening product […]

If a skin-lightening cream is sold in a brick-and-mortar drug store that was aware the product contained mercury, there is no question that retail seller would have some obligation to provide Proposition 65 warnings—depending, of course, on whether entities further up the distribution chain had provided warnings for the products and, if not, could be held to account. Nothing in the text or purposes of the CDA suggests it should be interpreted to insulate Amazon from responsibilities under Proposition 65 that would apply to a brick-and-mortar purveyor of the same product. Not only would such an interpretation give Amazon a competitive advantage unintended by Congress in enacting the CDA, but it would be inimical to the purposes of Proposition 65. Amazon makes it possible for sellers who might not be able to place their products in traditional retail stores to reach a vast audience of potential customers. (E.g., Bolger, supra, 53 Cal.App.5th at p. 453 [“The Amazon website . . . enables manufacturers and sellers who have little presence in the United States to sell products to customers here”].) The evidence in this case indicates that mercury-containing skin-lightening products are overwhelmingly likely to have been manufactured outside the United States—unsurprisingly, as FDA regulations prohibit use of mercury as a skin-lightening agent in cosmetics. (21 C.F.R. § 700.13.) This makes it all the more likely Amazon may be the only business that can readily be compelled to provide a Proposition 65 warning for these products. (See 2016 FSOR, supra, p. 55 [discussing impracticality of enforcing warning requirement against foreign entity without agent for service of process in United States]; Bolger, supra, 53 Cal.App.5th at p. 453 [noting as first factor supporting application of strict liability doctrine to Amazon that it “may be the only member of the distribution chain reasonably available to an injured plaintiff who purchases a product on its website”].) Amazon is thus making available to consumers, and profiting from sales of, products that clearly require Proposition 65 warnings, yet are likely to have been manufactured and distributed by entities beyond the reach of reasonable enforcement efforts. Insulating Amazon from liability for its own Proposition 65 obligations in these circumstances would be anomalous […]

[…] to review the product’s packaging and/or listing on the Web site to determine whether a warning was provided by the third-party seller. These facts do not mean Lee’s claims necessarily treat Amazon as a speaker or publisher of information provided by the third-party sellers. If Amazon has actual or constructive knowledge that a product contains mercury, it might choose to review the product listing to determine whether the third-party seller had provided a Proposition 65 warning before providing the warning itself or removing the listing. But nothing inherently requires Amazon to do so. It could choose, instead, to act on its knowledge by providing the warning regardless, pursuant to its own obligations under Proposition 65>>

This law << “ ‘is a remedial law, designed to protect the public’ ” which must be construed “ ‘broadly to accomplish that protective purpose.’ ” (Center for Self-Improvement & Community Development v. Lennar Corp., supra, 173 Cal.App.4th at pp. 1550–1551, quoting People ex rel. Lungren v. Superior Court, supra, 14 Cal.4th at p. 314.) Moreover, states’ “police powers to protect the health and safety of their citizens . . . are ‘primarily, and historically, . . . matter[s] of local concern.’ ” (Medtronic, Inc. v. Lohr (1996) 518 U.S. 470, 485.) The United States Supreme Court has explained that “[w]hen addressing questions of express or implied pre-emption, we begin our analysis ‘with the assumption that the historic police powers of the States [are] not to be superseded by the Federal Act unless that was the clear and manifest purpose of Congress.’ [Citation].” (Altria Group, Inc. v. Good (2008) 555 U.S. 70, 77.) The “strong presumption against displacement of state law . . . applies not only to the existence, but also to the extent, of federal preemption. [Citation.] Because of it, ‘courts should narrowly interpret the scope of Congress’s “intended invalidation of state law” whenever possible.’ [Citation].” (Brown v. Mortensen (2011) 51 Cal.4th 1052, 1064.)

As the Ninth Circuit has explained, Congress intended “to preserve the free-flowing nature of Internet speech and commerce without unduly prejudicing the enforcement of other important state and federal laws. When Congress passed section 230 it didn’t intend to prevent the enforcement of all laws online; rather, it sought to encourage interactive computer services that provide users neutral tools to post content online to police that content without fear that through their ‘good samaritan . . . screening of offensive material,’ [citation], they would become liable for every single message posted by third parties on their website.” (Roommates.com, supra, 521 F.3d at p. 1175, quoting § 230(c).)

The text of section 230(e)(3) is clear that state laws inconsistent with section 230 are preempted while those consistent with section 230 are not preempted. Proposition 65’s warning requirement is an exercise of state authority to protect the public that imposes obligations on any individual who exposes another to a listed chemical. Proposition 65 is not inconsistent with the CDA because imposing liability on Amazon for failing to comply with its own, independent obligations under Proposition 65, does not require treating Amazon as the publisher or speaker of third-party sellers’ content.

Accordingly, if Lee can establish all the elements of a violation of Proposition 65, section 230 does not immunize Amazon from liability>>, pp. 79-80

(News of the decision and link from Prof. Eric Goldman’s blog)

Copying threads containing defamatory, copyrighted posts from one forum to another does not preclude the safe harbour, and constitutes fair use

Copying posts (indeed, entire threads) from one forum to another (on the occasion of, and indeed because of, a 2017 policy change) does not forfeit the § 230 CDA safe harbour as to defamatory posts; moreover, as regards copyright, it constitutes fair use.

So the First Circuit, affirming the district court, in its decision of March 10, 2022, case No. 21-1146, Monsarrat v. Newman.

As to § 230 CDA, the holding is sound.

The first platform was LiveJournal, under Russian control; the one receiving the transfer (carried out by a moderator) is Dreamwidth.

(Decision, and link to it, from Prof. Eric Goldman’s blog)

Defamation for having published on Facebook the aggressive emails one has received is not covered by the safe harbour under § 230 CDA

Defamation for having published on Facebook aggressive/offensive emails received is not covered by the § 230 CDA safe harbour: essentially because these are not third-party materials that the third party wanted posted on the internet, but rather the email recipient’s own choice.

So holds the Eastern District of California, March 3, 2022, Crowley et al. v. Faison et al., Case 2:21-cv-00778-MCE-JDP.

The case concerns the publication, by the local leader of the Black Lives Matter movement in Sacramento, of emails she had received.

The relevant passage: <<Defendants nonetheless ignore certain key distinctions that make their reliance on the Act problematic.

Immunity under § 230 requires that the third-party provider, here the individual masquerading as Karra Crowley, have “provided” the emails to Defendants “for use on the Internet or another interactive computer service.” Batzel, 333 F.3d at 1033 (emphasis in original).

Here, as Plaintiffs point out, the emails were sent directly to BLM Sacramento’s general email address. “[I]f the imposter intended for his/her emails to be posted on BLM Sacramento’s Facebook page, the imposter could have posted the email content directly to the Facebook page,” yet did not do so. Pls.’ Opp to Mot. to Strike, 18:9-11 (emphasis in original). Those circumstances raise a legitimate question as to whether the imposter indeed intended to post on the internet, and without a finding to that effect the Act’s immunity does not apply. These concerns are further amplified by the fact that Karra Crowley notified Defendants that she did not author the emails, and they did not come from her email address within 24 hours after the last email attributed to her was posted. Defendants nonetheless refused to take down the offending posts from its Facebook page, causing the hateful and threatening messages received by Plaintiffs to continue.

As set forth above, one of the most disgusting of those messages, in which the sender graphically described how he or she was going to kill Karra Crowley and her daughter, was sent nearly a month later. In addition, while the Act does provide immunity for materials posted on the internet which the publisher had no role in creating, here Defendants did not simply post the emails. They went on to suggest that Karra Crowley “needs to be famous” and represented that her “information has been verified”, including business and home addresses. Compl., ¶¶ 13-14. It is those representations that Plaintiffs claim are libelous, particularly after Defendants persisted in allowing the postings to remain even after they had been denounced as false, a decision which caused further harassment and threats to be directed towards Plaintiffs.

As the California Supreme Court noted in Barrett, Plaintiffs remain “free under section 230 to pursue the originator of a defamatory Internet publication.” 40 Cal. 4th at 6>>

Given the wording of the provision, it is hard to fault the California court.

Note that the party invoking the safe harbour is not a digital platform, as usually happens, but one of its users: which is perfectly legitimate, given the statutory text.

(News and link to the decision from Prof. Eric Goldman’s blog)

Safe harbour under § 230 CDA for failure to warn and failure to remove sensitive material? Yes

The mother of a child, whose sexually suggestive images she had found uploaded to TikTok, sues the platform for the following wrongs: it <<did not put any warning on any of the videos claiming they might contain sensitive material; did not remove any of the videos from its platform; did not report the videos to any child abuse hotline; did not sanction, prevent, or discourage the videos in any way from being viewed, shared, downloaded or disbursed in any other way; and “failed to act on their own policies and procedures along with State and Federal Statutes and Regulations”>>.

The Northern District of Illinois, Western Division, February 28, 2022, Case No. 21 C 50129, Day v. TikTok, sustains the safe harbour defense under § 230 CDA raised by the platform (citing the well-known 2008 Craigslist precedent):

<<“What § 230(c)(1) says is that an online information system must not ‘be treated as the publisher or speaker of any information provided by’ someone else.” Chicago Lawyers’ Committee for Civil Rights Under Law, Inc. v. Craigslist, Inc., 519 F.3d 666, 671 (7th Cir. 2008).

In Chicago Lawyers’, plaintiff sought to hold Craigslist liable for postings made by others on its platform that violated the anti-discrimination in advertising provision of the Fair Housing Act (42 U.S.C. § 3604(c)). The court held 47 U.S.C. § 230(c)(1) precluded Craigslist from being liable for the offending postings because “[i]t is not the author of the ads and could not be treated as the ‘speaker’ of the posters’ words, given § 230(c)(1).” Id. The court rejected plaintiff’s argument that Craigslist could be liable as one who caused the offending post to be made stating “[a]n interactive computer service ‘causes’ postings only in the sense of providing a place where people can post.” Id. “Nothing in the service craigslist offers induces anyone to post any particular listing or express a preference for discrimination.” Id. “If craigslist ‘causes’ the discriminatory notices, then, so do phone companies and courier services (and, for that matter, the firms that make the computers and software that owners use to post their notices online), yet no one could think that Microsoft and Dell are liable for ‘causing’ discriminatory advertisements.” Id. at 672. The court concluded the opinion by stating that plaintiff could use the postings on Craigslist to identify targets to investigate and “assemble a list of names to send to the Attorney General for prosecution. But given § 230(c)(1) it cannot sue the messenger just because the message reveals a third party’s plan to engage in unlawful discrimination.”>>

And so the complaint in this specific case <<does not allege defendant created or posted the videos. It only alleges defendant allowed and did not timely remove the videos posted by someone else. This is clearly a complaint about “information provided by another information content provider” for which defendant cannot be held liable by the terms of Section 230(c)(1)>>.

Hard to fault the court, in light of the text of the provision invoked by TikTok.

(News and link to the decision from Prof. Eric Goldman’s blog)