Twitter account suspension is covered by the safe harbor under § 230 CDA (with a note on EU law)

The U.S. District Court for the Northern District of California, August 23, 2023, Case No. 23-cv-00980-JSC, Zhang v. Twitter, dismisses the Twitter user's claim because the safe harbor applies.

The rule is by now so settled that one wonders how a lawyer could advise litigating it (in this case, however, Zhang was "representing himself").

Here I note only the court's (brief) explanation of why Twitter is not the provider of the information, so that the statutory requirement is met:

<<Second, Plaintiff seeks to hold Twitter liable for decisions regarding “information provided by another information content provider”—that is, information he and the third-party user, rather than Twitter, provided. Plaintiff’s argument Twitter is itself “an information content provider” of the third-party account holder’s content within the meaning of Section 230(f)(3) is misplaced. (Dkt. No. 53 at 21-22.) Section 230(f)(3) defines “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”

Plaintiff appears to argue Twitter’s placement of information in “social media feeds” renders it an information content provider.

Not so. “[P]roliferation and dissemination of content does not equal creation or development of content.” Kimzey v. Yelp! Inc., 836 F.3d 1263, 1271 (9th Cir. 2016); see also Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1174 (9th Cir. 2008) (finding Section 230 immunity applies where the interactive computer service provider “is not responsible, in whole or in part, for the development of th[e] content, which comes entirely from subscribers and is passively displayed by [the interactive computer service provider].”)>>.

See the corresponding provision of the Digital Services Act, Art. 6 of Regulation (EU) 2022/2065, and the many judgments issued in Italy under Arts. 16 and 17 of Legislative Decree 70/2003.

(news and link to the decision from Prof. Eric Goldman's blog)

Overcoming Facebook's safe harbor under § 230 CDA by alleging that its algorithm helped radicalize the killer

Prof. Eric Goldman reports a decision of the District of South Carolina, Charleston Division, of July 24, 2023, dismissing on safe-harbor grounds a damages claim against Meta brought by relatives of a victim of the massacre carried out by Dylann Roof in 2015 at the Charleston church.

Unfortunately there is no link to the text of the decision, but there is one to the initial complaint, which argues well why Facebook's position was not merely passive.

It may be useful in Italy as well, where, however, overcoming the hurdle of the platform's specific foreseeability is not easy (though perhaps it could be framed as conscious negligence, colpa con previsione).

Algorithmic discrimination by Facebook's marketplace and the safe harbor under § 230 CDA

Prof. Eric Goldman reports the Ninth Circuit appellate decision of June 20, 2023, No. 21-16499, Vargas et al. v. Facebook, in a case of alleged discrimination in the delivery of commercial offers on its marketplace.

The claim: <<The operative complaint alleges that Facebook’s “targeting methods provide tools to exclude women of color, single parents, persons with disabilities and other protected attributes,” so that Plaintiffs were “prevented from having the same opportunity to view ads for housing” that Facebook users who are not in a protected class received>>.

The safe harbor does not apply because Facebook is not a stranger to the unlawful conduct but its co-author, as creator of the algorithm used in the discriminatory practice:

<<2. The district court also erred by holding that Facebook is immune from liability pursuant to 47 U.S.C. § 230(c)(1). “Immunity from liability exists for ‘(1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a [federal or] state law cause of action, as a publisher or speaker (3) of information provided by another information content provider.’” Dyroff v. Ultimate Software Grp., 934 F.3d 1093, 1097 (9th Cir. 2019) (quoting Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1100 (9th Cir. 2009)). We agree with Plaintiffs that, taking the allegations in the complaint as true, Plaintiffs’ claims challenge Facebook’s conduct as a co-developer of content and not merely as a publisher of information provided by another information content provider.
Facebook created an Ad Platform that advertisers could use to target advertisements to categories of users. Facebook selected the categories, such as sex, number of children, and location. Facebook then determined which categories applied to each user. For example, Facebook knew that Plaintiff Vargas fell within the categories of single parent, disabled, female, and of Hispanic descent. For some attributes, such as age and gender, Facebook requires users to supply the information. For other attributes, Facebook applies its own algorithms to its vast store of data to determine which categories apply to a particular user.
The Ad Platform allowed advertisers to target specific audiences, both by including categories of persons and by excluding categories of persons, through the use of drop-down menus and toggle buttons. For example, an advertiser could choose to exclude women or persons with children, and an advertiser could draw a boundary around a geographic location and exclude persons falling within that location. Facebook permitted all paid advertisers, including housing advertisers, to use those tools. Housing advertisers allegedly used the tools to exclude protected categories of persons from seeing some advertisements.
As the website’s actions did in Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) (en banc), Facebook’s own actions “contribute[d] materially to the alleged illegality of the conduct.” Id. at 1168. Facebook created the categories, used its own methodologies to assign users to the categories, and provided simple drop-down menus and toggle buttons to allow housing advertisers to exclude protected categories of persons. Facebook points to three primary aspects of this case that arguably differ from the facts in Roommates.com, but none affects our conclusion that Plaintiffs’ claims challenge Facebook’s own actions>>.

And here are Facebook's three objections, with the court's reasons for rejecting them:

<<First, in Roommates.com, the website required users who created profiles to self-identify in several protected categories, such as sex and sexual orientation. Id. at 1161. The facts here are identical with respect to two protected categories because Facebook requires users to specify their gender and age. With respect to other categories, it is true that Facebook does not require users to select directly from a list of options, such as whether they have children. But Facebook uses its own algorithms to categorize the user. Whether by the user’s direct selection or by sophisticated inference, Facebook determines the user’s membership in a wide range of categories, and Facebook permits housing advertisers to exclude persons in those categories. We see little meaningful difference between this case and Roommates.com in this regard. Facebook was “much more than a passive transmitter of information provided by others; it [was] the developer, at least in part, of that information.” Id. at 1166. Indeed, Facebook is more of a developer than the website in Roommates.com in one respect because, even if a user did not intend to reveal a particular characteristic, Facebook’s algorithms nevertheless ascertained that information from the user’s online activities and allowed advertisers to target ads depending on the characteristic.
Second, Facebook emphasizes that its tools do not require an advertiser to discriminate with respect to a protected ground. An advertiser may opt to exclude only unprotected categories of persons or may opt not to exclude any categories of persons. This distinction is, at most, a weak one. The website in Roommates.com likewise did not require advertisers to discriminate, because users could select the option that corresponded to all persons of a particular category, such as “straight or gay.” See, e.g., id. at 1165 (“Subscribers who are seeking housing must make a selection from a drop-down menu, again provided by Roommate[s.com], to indicate whether they are willing to live with ‘Straight or gay’ males, only with ‘Straight’ males, only with ‘Gay’ males or with ‘No males.’”). The manner of discrimination offered by Facebook may be less direct in some respects, but as in Roommates.com, Facebook identified persons in protected categories and offered tools that directly and easily allowed advertisers to exclude all persons of a protected category (or several protected categories).
Finally, Facebook urges us to conclude that the tools at issue here are “neutral” because they are offered to all advertisers, not just housing advertisers, and the use of the tools in some contexts is legal. We agree that the broad availability of the tools distinguishes this case to some extent from the website in Roommates.com, which pertained solely to housing. But we are unpersuaded that the distinction leads to a different ultimate result here. According to the complaint, Facebook promotes the effectiveness of its advertising tools specifically to housing advertisers. “For example, Facebook promotes its Ad Platform with ‘success stories,’ including stories from a housing developer, a real estate agency, a mortgage lender, a real estate-focused marketing agency, and a search tool for rental housing.” A patently discriminatory tool offered specifically and knowingly to housing advertisers does not become “neutral” within the meaning of this doctrine simply because the tool is also offered to others>>.

No liability for Amazon over the sale of sodium nitrite later used for suicide

An interesting ruling (on tragic facts) by the Western District of Washington at Seattle, June 27, 2023, Case No. C23-0263JLR, McCarthy v. Amazon:

<<the Sodium Nitrite was not defective, and that Amazon thus did not owe a duty to warn…the Sodium Nitrite’s warnings were sufficient because the label identified the product’s general dangers and uses, and the dangers of ingesting Sodium Nitrite were both known and obvious. The allegations in the amended complaint establish that Kristine and Ethan deliberately sought out Sodium Nitrite for its fatal properties, intentionally mixed large doses of it with water, and swallowed it to commit suicide….the risk associated with intentionally ingesting a large dose of an industrial grade chemical is also obvious…In this case, the danger was particularly obvious because the Sodium Nitrite “was not marketed as safe for human consumption or ingestion,” and appears to have been categorized as “Business, Industrial, and Scientific Supplies”…
given Kristine and Ethan’s knowledge regarding the dangers of ingesting Sodium Nitrite as well as the general warnings provided on the bottle and the obvious dangers associated with ingesting industrial-grade chemicals, the court concludes that the Sodium Nitrite’s warnings were not defective. Amazon therefore had no duty to provide additional warnings regarding the dangers of ingesting Sodium Nitrite…
even if Amazon owed a duty to provide additional warnings as to the dangers of ingesting sodium nitrite, its failure to do so was not the proximate cause of Kristine and Ethan’s deaths…Kristine and Ethan sought the Sodium Nitrite out for the purpose of committing suicide and intentionally subjected themselves to the Sodium Nitrite’s obvious and known dangers and those described in the warnings on the label. Plaintiffs do not plausibly allege that better warnings from Amazon would have discouraged Ethan and Kristine from ingesting sodium nitrite>>.

Having removed the reviews does not help the plaintiffs, against whom the safe harbor under § 230 CDA is successfully raised, p. 19 ff.

(quoted passage taken from Prof. Eric Goldman's post on his blog)

§ 230 CDA saves Amazon from the charge of co-liability for defamatory reviews of a seller on its marketplace

The defamatory review (mildly so, in truth: a Burberry scarf allegedly not authentic) cannot make Amazon co-liable, because the cited safe harbor applies.

This is indeed precisely the publisher/speaker role contemplated by the statute. Nor can an active contribution by Amazon be found in its having set the rules of its platform, as the defamed party would have it: the well-known Roommates case is invoked to no avail.

A rather easy case.

So held the Eleventh Circuit on appeal, June 12, 2023, No. 22-11725, McCall et al. v. Zotos and Amazon:

<<In that case, Roommates.com published a profile page for each subscriber seeking housing on its website. See id. at 1165. Each profile had a drop-down menu on which subscribers seeking housing had to specify whether there are currently straight males, gay males, straight females, or lesbians living at the dwelling. This information was then displayed on the website, and Roommates.com used this information to channel subscribers away from the listings that were not compatible with the subscriber’s preferences. See id. The Ninth Circuit determined that Roommates.com was an information content provider (along with the subscribers seeking housing on the website) because it helped develop the information at least in part. Id. (“By requiring subscribers to provide the information as a condition of accessing its service, and by providing a limited set of prepopulated answers, Roommate[s.com] . . . becomes the developer, at least in part, of that information.”).
Roommates.com is not applicable, as the complaint here alleges that Ms. Zotos wrote the review in its entirety. See generally D.E. 1. Amazon did not create or develop the defamatory review even in part—unlike Roommates.com, which curated the allegedly discriminatory dropdown options and required the subscribers to choose one. There are no allegations that suggest Amazon helped develop the allegedly defamatory review.
The plaintiffs seek to hold Amazon liable for failing to take down Ms. Zotos’ review, which is exactly the kind of claim that is immunized by the CDA—one that treats Amazon as the publisher of that information. See 47 U.S.C. § 230(c)(1). See also D.E. 1 at 5 (“Amazon . . . refused to remove the libelous statements posted by Defendant Zotos”). “Lawsuits seeking to hold a service provider [like Amazon] liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone, or alter content—are barred.” Zeran, 129 F.3d at 330. We therefore affirm the dismissal of the claims against Amazon>>.

(news and link from Prof. Eric Goldman's site)

The difference between inapplicability of the safe harbor and a finding of liability

The Western District of Wisconsin, March 31, 2023, Case No. 21-cv-320-wmc, Hopson and Bluetype v. Google and Does 1-2, is quite clear on the difference between the two concepts: that the safe harbor cannot be invoked does not mean that liability affirmatively exists (even though in practice it will be likely).

Some Italian commentators (in scholarship and case law) are not as clear on this.

The case concerned the copyright safe harbor in a notice-and-takedown procedure, in particular an alleged violation of the procedure that should have led Google to "put back up" materials previously "taken down" (§ 512(g) of the DMCA).

<<Here, plaintiffs allege that defendant Google failed to comply with § 512(g)’s strictures by: (1) redacting contact information from the original takedown notices; (2) failing to restore the disputed content within 10 to 14 business days of receiving plaintiffs’ counter notices; and (3) failing to forward plaintiffs’ counter notices to the senders of the takedown notices. As Google points out, however, its alleged failure to comply with § 512(g) does not create direct liability for any violation of plaintiffs’ rights. It merely denies Google a safe harbor defense should plaintiffs bring some other claim against the ISP for removing allegedly infringing material, such as a state contract or tort law claim. Martin, 2017 WL 11665339, at *3-4 (§ 512(g) does not create any affirmative cause of action; it creates a defense to liability); see also Alexander v. Sandoval, 532 U.S. 275, 286-87 (2001) (holding plaintiffs may sue under a federal statute only where there is an express or implied private right of action). So, even if Google did not follow the procedure entitling it to a safe harbor defense in this case, the effect is disqualifying it from that defense, not creating liability under § 512(g) of the DMCA for violating plaintiffs’ rights.>>

Still nothing on such a procedure in the EU: Arts. 16-17 of the DSA, Regulation (EU) 2022/2065, do not address it (apparently leaving it to contractual autonomy), nor does the specific copyright directive, Art. 17 of Directive (EU) 790/2019.

(news and link from Prof. Eric Goldman's site)

The student who lets teachers be defamed by giving his social-media credentials to friends, who author the posts, is not protected by the safe harbor under § 230 CDA

The Sixth Circuit, No. 22-1748, Jason Kutchinski v. Freeland Community School District; Matthew A. Cairy and Traci L. Smith, decides a suit brought by the student challenging the disciplinary sanction imposed on him for having given his Instagram credentials to friends, who authored posts defaming the school's teachers.

The student cannot be characterized as a publisher or speaker, being instead a co-author of the harmful conduct:

<<Like the First, Fourth, and Ninth Circuits, we hold that when a student causes, contributes to, or affirmatively participates in harmful speech, the student bears responsibility for the harmful speech. And because H.K. contributed to the harmful speech by creating the Instagram account, granting K.L. and L.F. access to the account, joking with K.L. and L.F. about their posts, and accepting followers, he bears responsibility for the speech related to the Instagram account.
Kutchinski disagrees and makes two arguments. First, Kutchinski argues that Section 230 of the Communications Decency Act, 47 U.S.C. § 230, bars Defendants from disciplining H.K. for the posts made by K.L. and L.F.     This is incorrect. Under § 230(c)(1), “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” To the extent § 230 applies, we do not treat H.K. as the “publisher or speaker” of the posts made by K.L. and L.F. Instead, we have found that H.K. contributed to the harmful speech through his own actions>>.

The court then adds:

<<Second, Kutchinski argues that disciplining H.K. for the posts emanating from the Instagram account violates H.K.’s First Amendment freedom-of-association rights. “The First Amendment . . . restricts the ability of the State to impose liability on an individual solely because of his association with another.” NAACP v. Claiborne Hardware Co., 458 U.S. 886, 918–19 (1982). “The right to associate does not lose all constitutional protection merely because some members of the group may have participated in conduct or advocated doctrine that itself is not protected.” Id. at 908. But Defendants did not discipline H.K. because he associated with K.L. and L.F. They determined that H.K. jointly participated in the wrongful behavior. Thus, Defendants did not impinge on H.K.’s freedom-of-association rights>>.

(news and link to the decision from Prof. Eric Goldman's blog)

Google is protected by the safe harbor under § 230 CDA for a scam by a fake advertiser (a fake eBay)

The U.S. District Court for the Southern District of New York, Case 1:22-cv-06831-JGK, Ynfante v. Google, on an easy § 230 CDA safe-harbor case:

<<In this case, it is plain that Section 230 protects Google from liability in the negligence and false advertising action brought by Mr. Ynfante. First, Google is the provider of an interactive computer service. The Court of Appeals for the Second Circuit has explained that “search engines fall within this definition,” LeadClick Media, 838 F.3d at 174, and Google is one such search engine. See, e.g., Marshall’s Locksmith Serv. Inc. v. Google, LLC, 925 F.3d 1263, 1268 (D.C. Cir. 2019) (holding that the definition of “interactive computer service” applies to Google specifically).
Second, there is no doubt that the complaint treats Google as the publisher or speaker of information. See, e.g., Compl. ¶¶ 27, 34. Section 230 “specifically proscribes liability” for “decisions relating to the monitoring, screening, and deletion of content from [a platform] — actions quintessentially related to a publisher’s role.” Green v. Am. Online (AOL), 318 F.3d 465, 471 (3d Cir. 2003). In other words, Section 230 bars any claim that “can be boiled down to the failure of an interactive computer service to edit or block user-generated content that it believes was tendered for posting online, as that is the very activity Congress sought to immunize by passing the section.” Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1172 n.32 (9th Cir. 2008). In this case, the plaintiff’s causes of action against Google rest solely on the theory that Google did not block a third-party advertisement for publication on its search pages. But for Google’s publication of the advertisement, the plaintiff would not have been harmed. See, e.g., Compl. ¶¶ 38-39, 61. The plaintiff therefore seeks to hold Google liable for its actions related to the screening, monitoring, and posting of content, which fall squarely within the exercise of a publisher’s role and are therefore subject to Section 230’s broad immunity.
Third, the scam advertisement came from an information content provider distinct from the defendant. As the complaint acknowledges, the advertisement was produced by a third party who then submitted the advertisement to Google for publication. See id. ¶ 26. It is therefore plain that the complaint is seeking to hold the defendant liable for information provided by a party other than the defendant and published on Google’s platform, which Section 230 forecloses>>

Nothing new.

(news and link to the decision from Prof. Eric Goldman's blog)

Embedding does not amount to communication to the public, yet it does not permit the safe-harbor defense under § 512 DMCA

Judge Barlow of the District of Utah, May 2, 2023, Case 2:21-cv-00567-DBB-JCB, decides an interesting dispute on embedding.

The plaintiff manages the rights to certain photographs taken by Annie Leibovitz. The defendants run a site that had "reproduced" them via embedding (that is, without storing a stable copy on its own server).

The judge applies the so-called server test from the well-known 2006 Perfect 10 Inc. v. Google case, summarized as follows: <<[In] Perfect 10, the Ninth Circuit addressed whether Google’s unauthorized display of thumbnail and full-sized images violated the copyright holder’s rights. The court first defined an image as a work “that is fixed in a tangible medium of expression . . . when embodied (i.e., stored) in a computer’s server (or hard disk, or other storage device).” The court defined “display” as an individual’s action “to show a copy . . ., either directly or by means of a film, slide, television image, or any other device or process ….”>>.

It therefore rejects the claim as to the embedding at issue:

<<The court finds Trunk Archive’s policy arguments insufficient to put aside the “server” test. Contrary to Trunk Archive’s claims, “practically every court outside the Ninth Circuit” has not “expressed doubt that the use of embedding is a defense to infringement.” Perfect 10 supplies a broad test. The court did not limit its holding to search engines or the specific way that Google utilized inline links. Indeed, Trunk Archive does not elucidate an appreciable difference between embedding technology and inline linking. “While appearances can slightly vary, the technology is still an HTML code directing content outside of a webpage to appear seamlessly on the webpage itself.” The court in Perfect 10 did not find infringement even though Google had integrated full-size images on its search results. Here, CBM Defendants also integrated (embedded) the images onto their website. (…) Besides, embedding redirects a user to the source of the content – in this case, an image hosted by a third-party server. The copyright holder could still seek relief from that server. In no way has the holder “surrender[ed] control over how, when, and by whom their work is subsequently shown.” To guard against infringement, the holder could take down the image or employ restrictions such as paywalls. Similarly, the holder could utilize “metadata tagging or visible digital watermarks to provide better protection.” (…) In sum, Trunk Archive has not persuaded the court to ignore the “server” test. Without more, the court cannot find that CBM Defendants are barred from asserting the “embedding” defense. The court denies in part Trunk Archive’s motion for partial judgment on the pleadings.>>

Moreover, the safe harbor at issue is denied, because this is not the statutorily contemplated case of mere storage on one's own server of materials supplied by others. The embedding had been created by the defendants, who took the materials from third-party servers: in short, not passivity but activity.

(news and link to the decision from Prof. Eric Goldman's blog)

Is a search engine co-liable for unwanted but mistaken associations in a case of identical names?

The answer is no under US law, since Microsoft is covered by the safe harbor under § 230 CDA:

So held, affirming the first-instance decision, Florida's First District Court of Appeal, Nos. 1D21-3629 and 1D22-1321 (consolidated for disposition), May 10, 2023, White v. Discovery Communications et al.

The facts:

Mr. White sued various nonresident defendants for damages in tort resulting from an episode of a reality/crime television show entitled “Evil Lives Here.” Mr. White alleged that beginning with the first broadcast of the episode “I Invited Him In” in August 2018, he was injured by the broadcasting of the episode about a serial killer in New York also named Nathaniel White. According to the allegations in the amended complaint, the defamatory episode used Mr. White’s photograph from a decades-old incarceration by the Florida Department of Corrections. Mr. White alleged that this misuse of his photo during the program gave viewers the impression that he and the New York serial killer with the same name were the same person thereby damaging Mr. White.

The law:

The persons who posted the information on the eight URLs provided by Mr. White were the “information content providers” and Microsoft was the “interactive service provider” as defined by 47 U.S.C. § 230(f)(2) and (3). See Marshall’s Locksmith Serv. Inc. v. Google, LLC, 925 F.3d 1263, 1268 (D.C. Cir. 2019) (noting that a search engine falls within the definition of interactive computer service); see also In re Facebook, Inc., 625 S.W. 3d 80, 90 (Tex. 2021) (internal citations omitted) (“The ‘national consensus’ . . . is that ‘all claims’ against internet companies ‘stemming from their publication of information created by third parties’ effectively treat the defendants as publishers and are barred.”). “By presenting Internet search results to users in a relevant manner, Google, Yahoo, and Microsoft facilitate the operations of every website on the internet. The CDA was enacted precisely to prevent these types of interactions from creating civil liability for the Providers.” Baldino’s Lock & Key Serv., Inc. v. Google LLC, 285 F. Supp. 3d 276, 283 (D.D.C. 2018), aff’d sub nom. Marshall’s Locksmith Serv., 925 F.3d at 1265.
In Dowbenko v. Google Inc., 582 Fed. App’x 801, 805 (11th Cir. 2014), the state law defamation claim was “properly dismissed” as “preempted under § 230(c)(1)” since Google, like Microsoft here, merely hosted the content created by other providers through search services. Here, as to Microsoft’s search engine service, the trial court was correct to grant summary judgment finding Microsoft immune from Mr. White’s defamation claim by operation of Section 230 since Microsoft did not publish any defamatory statement.
Mr. White argues that even if Microsoft is immune for any defamation occurring by way of its internet search engine, Microsoft is still liable as a service that streamed the subject episode. Mr. White points to the two letters from Microsoft in support of his argument. For two reasons, we do not reach whether an internet streaming service is an “interactive service provider” immunized from suit for defamation by Section 230.
First, the trial court could not consider the letters in opposition to the motion for summary judgment. The letters were not referenced in Mr. White’s written response to Microsoft’s motion. They were only in the record in response to a different defendant’s motion for a protective order. So the trial court could disregard the letters in ruling on Microsoft’s motion. See Fla. R. Civ. P. 1.510(c)(5); Lloyd S. Meisels, P.A. v. Dobrofsky, 341 So. 3d 1131, 1136 (Fla. 4th DCA 2022). Without the two letters, Mr. White has no argument that Microsoft was a publisher of the episode.
Second, even considering the two letters referenced by Mr. White, they do not show that Microsoft acted as anything but an interactive computer service. That the subject episode was possibly accessible for streaming via a Microsoft search platform does not mean that Microsoft participated in streaming or publishing the episode

(news and link to the decision from Prof. Eric Goldman's blog)