
In a filing to the Court on Wednesday, Texas argued that its law, HB 20, which prohibits large social media firms from blocking, banning or demoting posts or accounts, does not violate the First Amendment. That stance contrasts with claims by opponents, including the tech industry, that the legislation infringes on tech platforms’ constitutional rights to make editorial decisions and to be free from government-compelled speech. The case is viewed as a bellwether for social media and could determine whether tech platforms may have to scale back their content moderation and allow a broad range of material that their terms currently prohibit. Justice Samuel Alito is currently considering whether to grant an emergency stay of a lower court decision that allowed the law to take effect last week. The law is being challenged by advocacy groups representing the tech industry.

In court papers, the advocacy groups call the law “an unprecedented assault on the editorial discretion of private websites.” They warn it would “compel platforms to disseminate all sorts of objectionable viewpoints — such as Russia’s propaganda claiming that its invasion of Ukraine is justified, ISIS propaganda claiming that extremism is warranted, neo-Nazi or KKK screeds denying or supporting the Holocaust, and encouraging children to engage in risky or unhealthy behavior like eating disorders.”

In response on Wednesday, Texas Attorney General Ken Paxton argued that HB 20 does not infringe on tech platforms’ speech rights because the state law instead seeks to regulate the companies’ conduct with regard to their users. Even if the law did raise First Amendment concerns, he argued, those concerns are adequately addressed by the fact that HB 20 defines social media companies as “common carriers” akin to phone companies and railroads.
The case has already drawn “friend of the court” briefs from interested third parties, including groups such as the Anti-Defamation League and the Texas State Conference of the NAACP, which urged the court to block the law, arguing it will “transform social media platforms into online repositories of vile, graphic, harmful, hateful, and fraudulent content, of no utility to the individuals who currently engage in those communities.”

Also seeking to file a third-party brief was former Rep. Chris Cox, co-author of the tech platform liability shield known as Section 230 of the Communications Decency Act, a federal law that explicitly permits websites to moderate content and that has become a lightning rod in the wider battle over digital speech. Social media operators have repeatedly cited Section 230 to successfully nip many suits concerning user-generated content in the bud. But HB 20 conflicts with Section 230 by saying platforms can be sued in Texas for moderating their online communities, raising questions about the future of the federal law that’s been described as “the 26 words that created the internet.”

On Saturday, Alito gave Texas a deadline of Wednesday evening to file its response to the stay request. He may either make a unilateral decision on the stay or refer it to the full Court.

The probe, disclosed Wednesday by James’ office, focuses on the livestreaming platform Twitch, the messaging service Discord and the websites 4chan and 8chan (now known as 8kun). Other unnamed companies could also be drawn into the investigation, James said. The investigation is expected to focus on companies that “the Buffalo shooter used to plan, promote, and stream his terror attack,” James announced in a tweet. News of the investigation further heightens scrutiny surrounding tech platforms and their handling of the Buffalo shooting suspect’s racist, violent online content, including a 180-page document that has been attributed to the suspect.

Prior to opening fire at a supermarket in a predominantly Black area of Buffalo, the suspect appears to have hinted at his plans on 4chan and to have created a private chat room on Discord. The suspect also attempted to livestream the shooting on Twitch; the stream was removed in less than two minutes but continued to spread on other large platforms. Discord told CNN in a statement that it plans to cooperate with the probe. Amazon-owned Twitch, 4chan and 8chan didn’t immediately respond to requests for comment.

Results of the inquiry will be sent to New York Gov. Kathy Hochul, who directed James to begin the investigation, Hochul said Wednesday. “These social media platforms have to take responsibility,” Hochul said. “They must be more vigilant in monitoring the content and they must be held accountable for favoring engagement over public safety.” In a letter to James dated Wednesday, Hochul called for the investigation to determine “whether specific companies have civil or criminal liability for their role in promoting, facilitating, or providing a platform to plan and promote violence.” Multiple experts on the First Amendment and platform liability have said it would not have been illegal for the Buffalo shooting suspect to livestream his video online.
Section 230 of the Communications Decency Act, along with the First Amendment, also shields social media and tech platforms from liability for most user-generated content, though a Texas state law currently before the Supreme Court purports to restrict how platforms can moderate content.

The complaint, filed privately Wednesday by the state’s Division of Human Rights, concerns Amazon’s alleged violation of state law by giving its worksite managers the ability to override reasonable-accommodation recommendations made by in-house Amazon officials — known as Accommodation Consultants — charged with reviewing such requests.

In one of several cases noted in the complaint, as outlined by Hochul’s office, New York officials alleged a pregnant Amazon worker was forced into taking unpaid leave after being injured on the job lifting packages weighing more than 25 pounds, despite having an approved accommodation exempting her from heavy lifting. Hochul’s office said the complaint “seeks a decision requiring Amazon to cease its discriminatory conduct, adopt non-discriminatory policies and practices regarding the review of requests for reasonable accommodations, train its employees on the provisions of the Human Rights Law, and pay civil fines and penalties to the State of New York.” Amazon (AMZN) didn’t immediately respond to a request for comment.

As the country’s second largest private employer, Amazon is known for its ultra-efficient warehouses that rely, in part, on closely tracking worker productivity, and it has faced scrutiny for high turnover rates and on-the-job injuries in recent years. The pandemic, during which Amazon’s workers were deemed essential, only exacerbated concerns over workplace conditions. But the company has been met with growing pushback from some workers and regulators during the pandemic, including in New York. Amazon has faced unionization efforts at two warehouse facilities in New York City, which recently garnered enough support to hold elections. Workers at the larger facility, known as JFK8, voted in favor of unionizing, becoming the first Amazon workers to formally unionize in the United States.
Amazon is challenging the results. Earlier this month, a New York appellate court dismissed a lawsuit brought by state Attorney General Letitia James concerning the company’s pandemic response. Morgan Rubin, first deputy press secretary for James’ office, said in a statement to CNN Business last week: “As our office reviews the decision and our options moving forward, Attorney General James remains committed to protecting Amazon workers, and all workers, from unfair treatment.”

Amazon had previously disputed the claims in James’ lawsuit. “We care deeply about the health and safety of our employees,” the company said in a prior statement. “We don’t believe the Attorney General’s filing presents an accurate picture of Amazon’s industry-leading response to the pandemic.”

Meanwhile, following shareholder pressure, Amazon disclosed in a filing last month that it would conduct an audit to “evaluate any disparate racial impacts” of its policies on its US hourly employees.

Hong Kong
CNN Business

China is trying once again to lift the spirits of its huge tech industry after a bruising regulatory offensive that has weakened some of its biggest businesses at a time of stalling economic growth.

In a rare public display of support for the private sector, Vice Premier Liu He said Tuesday that the government would “properly manage” the relationship between the government and the market, and back tech companies to list in both domestic and foreign markets. Liu is a top economic adviser to President Xi Jinping.

He was speaking at a symposium with other officials and Chinese tech executives, including Robin Li, CEO of internet search giant Baidu (BIDU); William Ding, CEO of gaming and content company NetEase (NTES); and Zhou Hongyi, CEO of internet security firm Qihoo 360 Technologies.

Chinese stocks on Wall Street surged after Liu’s comments, but mostly declined Wednesday in Hong Kong. This suggests that investors are still deeply concerned about the growth prospects of China’s big internet companies and are looking for more specific commitments from the government.

Those concerns were reinforced Wednesday when Tencent (TCEHY) reported zero revenue growth in the first quarter, a worse outcome than expected.

Beijing’s year-long regulatory crackdown has left deep scars on the huge tech sector. Coupled with a weakening economy, the campaign has wiped more than $1 trillion off the market value of Chinese companies. Many tech firms have reported dismal earnings or cut tens of thousands of jobs to reduce operating costs.

The Chinese economy is likely to contract in the second quarter, as Covid lockdowns wreak havoc on activity. Consumer spending and factory output both shrank sharply last month, while unemployment surged to the highest level since the initial coronavirus outbreak in early 2020.

Looking at the fine print

Liu’s comments were welcomed by tech executives at the symposium.

Zhou from Qihoo 360 said on Weibo that he felt “confidence and support” from the meeting. “At this moment, confidence and support are more precious than gold,” he said.

The Nasdaq Golden Dragon China Index, a key index tracking Chinese companies listed on Wall Street, surged by more than 5% overnight following Liu’s comments. Alibaba (BABA) soared more than 6% on the New York Stock Exchange. Baidu jumped 4.8%.

The broader US market also closed higher on Tuesday. The Dow Jones Industrial Average closed up 1.3%. The S&P 500 rose 2%, and the Nasdaq Composite gained 2.8%.

“While the [symposium] did not include much new context in our view, we do believe the meeting suggests another positive regulatory signal towards the platform economy and supportive attitude of internet companies seeking listing in overseas markets,” said Citi analysts on Wednesday.

But the lack of detail from Liu weighed on Asian markets on Wednesday.

The Hang Seng Tech Index, a key index for Chinese tech firms listed in Hong Kong, dropped as much as 2.3% on Wednesday. It was last down 0.3%. The benchmark Hang Seng Index closed up 0.2% after choppy trading.


Alibaba (BABA) lost 0.6%. Tencent (TCEHY) dropped 0.8%. Kuaishou, a rival to TikTok in China, fell 2.5%.

The “Chinese government appears to be running out of policy tools to support growth,” said Ken Cheung, chief Asian FX strategist at Mizuho Bank.

The escalating downside risks for growth might have prompted the leadership to end the tech crackdown quickly, Cheung said. But it may take more time to repair investors’ confidence, he added.

Recent earnings show how much China’s tech industry continues to struggle.

Online retail giant JD.com (JD) on Monday posted its slowest quarterly revenue growth since it went public in 2014.

Earlier this year, Alibaba and e-commerce firm Pinduoduo reported their slowest sales growth as public companies for the December quarter.

In a message to staffers obtained by CNN Business, Apple’s pandemic response team cited Covid-19 conditions as cause for delaying the plan, which was scheduled to take effect next week. The team indicated the update is for employees in Santa Clara Valley, where Apple is headquartered. “We’ll make changes to other locations as required,” the message said. “We’re continuing to monitor local data closely and are committed to providing at least two weeks notice of any changes.” Apple declined to comment on the message. The news was first reported by Bloomberg News.

The move is the latest example of how large tech companies, which were among the first to shift to remote work in the early days of the pandemic, have faced challenges in bringing their employees back on a regular basis to the perk-filled campuses they spent billions to build. Apple’s hybrid return-to-work plan has been a source of tension among some employees since it was first announced last summer. After delays due to surges in Covid-19 cases in the fall and winter months, Apple started requiring most workers to be in the office at least once per week in April, before upping the number to twice a week later that month.

In anticipation of the last phase of the pilot, which would have workers in the office on Mondays, Tuesdays and Thursdays, a group of employees organizing under a newly formed group known as Apple Together publicly petitioned leadership for more flexibility. Apple Together, which was formed to advocate for workers’ well-being and rights, called out a perceived disconnect between the company’s external marketing to customers — that its products allow people to “work from anywhere” — and its internal messaging to staffers. “We are not asking for everyone to be forced to work from home,” the group wrote.
“We are asking to decide for ourselves, together with our teams and direct manager, what kind of work arrangement works best for each one of us, be that in an office, work from home, or a hybrid approach.” The petition has been signed by 1,445 current and former Apple employees to date. The company has not commented on its existence.

For now, Apple’s Bay Area employees will continue working in the office two days per week, with some added safety protocols. Apple said in the message to workers that it is temporarily asking people to wear masks when in common areas of the office, such as meeting rooms, hallways and elevators. Apple also told workers that if they are “uncomfortable coming into the office during this time, you have the option to work remotely. Please discuss your plans with your manager.”

Major social media platforms have tried to improve how they respond to the sharing of this kind of content since the mass shooting in Christchurch, New Zealand, in 2019, which was streamed live on Facebook. In the 24 hours after that attack, Facebook said it removed 1.5 million copies of the video. Experts in online extremism say such content can act as far-right terrorist propaganda and inspire others to carry out similar attacks; the Buffalo shooter was directly influenced by the Christchurch attack, according to the document he allegedly shared. The stakes for quickly addressing the spread of such content are significant. “This fits into a model that we’ve seen over and over and over again,” said Ben Decker, CEO of digital investigations consultancy Memetica and an expert on online radicalization and extremism. “At this point we know that the consumption of these videos creates copycat mass shootings.” Still, social media companies face challenges in responding to what appears to be users posting a deluge of copies of the Buffalo shooting video and document.

The response by Big Tech

Saturday’s attack was streamed live on Twitch, a video streaming service owned by Amazon (AMZN) that is particularly popular with gamers. Twitch said it removed the video two minutes after the violence started, before it could be widely viewed but not before it was downloaded by other users. The video has since been shared hundreds of thousands of times across major social media platforms and also posted to more obscure video hosting sites. Spokespeople for Facebook, Twitter (TWTR), YouTube and Reddit all told CNN that they have banned sharing the video on their sites and are working to identify and remove copies of it. (TikTok did not respond to requests for comment on its response.) But the companies appear to be struggling to contain the spread and manage users looking for loopholes in their content moderation practices.
CNN observed a link to a copy of the video circulating on Facebook on Sunday night. Facebook included a warning that the link violated its community standards but still allowed users to click through and watch the video. Facebook parent company Meta (FB) said it had removed the link after CNN asked about it.

Meta on Saturday designated the event as a “terrorist attack,” which triggered the company’s internal teams to identify and remove the account of the suspect, as well as to begin removing copies of the video and document and links to them on other sites, according to a company spokesperson. The company added the video and document to an internal database that helps automatically detect and remove copies if they are reuploaded. Meta has also banned content that praises or supports the attacker, the spokesperson said. The video was also hosted on a lesser known video service called Streamable and was removed only after it had reportedly been viewed more than 3 million times and its link shared across Facebook and Twitter, according to The New York Times. A spokesperson for Streamable told CNN the company was “working diligently” to remove copies of the video “expeditiously.” The spokesperson did not respond when asked how one video had reached millions of views before it was removed. Copies of the document allegedly written by the shooter were uploaded to Google Drive and other, smaller online storage sites and shared over the weekend via links to those platforms. Google did not respond to requests for comment about the use of Drive to spread the document.

Challenges for addressing extremist content

In some cases, the big platforms appeared to struggle with common moderation pitfalls, such as removing English-language uploads of the video faster than those in other languages, according to Tim Squirrell, communications head at the Institute for Strategic Dialogue, a think tank dedicated to addressing extremism.
But the mainstream Big Tech platforms also must grapple with the fact that not all internet platforms want to take action against such content. In 2017, Facebook, Microsoft (MSFT), YouTube and Twitter founded the Global Internet Forum to Counter Terrorism, an organization designed to promote collaboration in preventing terrorists and violent extremists from exploiting their platforms; it has since grown to include more than a dozen companies. Following the Christchurch attack in 2019, the group committed to prevent the livestreaming of attacks on their platforms and to coordinate to address violent and extremist content. “Now, technically, that failed. It was on Twitch. It then started getting posted around in the initial 24 hours,” Decker said, adding that the platforms have more work to do in effectively coordinating to remove harmful content during crisis situations. Still, the work done by the major platforms since Christchurch meant that their response to Saturday’s attack was faster and more robust than the reaction three years ago.

But elsewhere on the internet, less moderated platforms such as 4chan and Telegram provided a place where users could congregate and coordinate to repeatedly re-upload the video and document. “Many of the threads on 4chan’s message board were just people demanding the stream over and over again, and once they got a seven-minute version, just reposting it over and over again” to bigger platforms, Squirrell said. As with other content on the internet, videos like the one of Saturday’s shooting are also often quickly manipulated by online extremist communities and incorporated into memes and other content that can be harder for mainstream platforms to identify and remove. Like Facebook, YouTube and Twitter, platforms like 4chan and Telegram rely on user-generated content and are legally protected (at least in the United States) by a law called Section 230 from liability over much of what users post.
But whereas the mainstream Big Tech platforms are incentivized by advertisers, social pressures and users to address harmful content, the smaller, more fringe platforms are not motivated by a desire to protect ad revenue or attract a broad base of users. In some cases, they desire to be online homes for speech that would be moderated elsewhere. “The consequence of that is that you can never complete the game of whack-a-mole,” Squirrell said. “There’s always going to be somewhere, someone circulating a Google Drive link or a Samsung cloud link or something else that allows people to access this … Once it’s out in the ether, it’s impossible to take everything down.”
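The internal databases described above work by fingerprinting known violating media and checking new uploads against that bank. The sketch below is a hypothetical, simplified illustration using an exact cryptographic hash; in practice, systems such as the industry hash-sharing database run by the Global Internet Forum to Counter Terrorism use perceptual hashes that tolerate re-encoding. All function names here are invented for illustration.

```python
import hashlib

# Hypothetical "hash bank" of known violating videos. This sketch uses an
# exact cryptographic hash for simplicity; real hash-sharing systems use
# perceptual hashes that survive re-encoding and cropping.
BANNED_HASHES: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying this exact byte sequence."""
    return hashlib.sha256(data).hexdigest()

def register_violation(data: bytes) -> None:
    """Add a known violating file to the shared hash bank."""
    BANNED_HASHES.add(fingerprint(data))

def is_known_reupload(data: bytes) -> bool:
    """Check an upload against the hash bank before it goes live."""
    return fingerprint(data) in BANNED_HASHES

video = b"...original video bytes..."
register_violation(video)
print(is_known_reupload(video))            # exact copy: True
print(is_known_reupload(video + b"\x00"))  # one byte appended: False
```

Because changing a single byte produces a completely different digest, an exact-match filter is trivially evaded by re-encoding or meme-ification, which is part of why manipulated copies are so much harder for platforms to catch than verbatim reuploads.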

In a series of tweets on Monday, Agrawal laid out Twitter’s approach to spam accounts and its challenges dealing with them. Twitter (TWTR) suspends “over half a million spam accounts every day,” Agrawal wrote. He reiterated a longstanding statistic from Twitter that less than 5% of its daily active users are spam accounts — a stat that Musk cited on Friday while announcing that his $44 billion deal to buy Twitter was “temporarily on hold.” Agrawal said that estimate is based on “multiple human reviews … of thousands of accounts” sampled at random, but that knowing externally which accounts are counted on any given day is not possible. Twitter has previously acknowledged that while it believes its estimates to be “reasonable,” the measurements were not independently verified and the actual number of fake or spam accounts could be higher.

Agrawal’s initial 13 tweets were met with a reply from Musk that was reflective of the unusual and extremely online nature of the deal: a poop emoji. Musk followed up with a somewhat more thoughtful question. “So how do advertisers know what they’re getting for their money?” Musk asked. “This is fundamental to the financial health of Twitter,” he added. Musk has repeatedly spoken out against bots and spam accounts on Twitter, once describing cryptocurrency spam bots as the platform’s “single most annoying problem.” Anyone familiar with the replies to Musk’s tweets knows they are full of such scams, many of which attempt to leverage Musk’s name. But some analysts speculate that the world’s richest man may be using the debate over bots to drive down the price at which he would have to buy Twitter, whether as a standard negotiating tactic or out of necessity.
Twitter’s stock price has erased all its gains in the weeks since Musk disclosed his stake in the company and is currently trading at $37.39 per share — well below Musk’s offer price of $54.20 per share. “The bot issue at the end of the day … feels more to us like the ‘dog ate the homework’ excuse to bail on the Twitter deal or talk down a lower price,” Dan Ives and John Katsingris, analysts at Wedbush Securities, wrote in a note on Monday. Musk appeared to add fuel to that speculation on Monday, reportedly saying that a deal to buy Twitter at a lower price wouldn’t be “out of the question,” while also throwing out his own estimate that at least 20% of all Twitter accounts are fake, according to Bloomberg. Musk didn’t say how he came to this number and did not respond to a request for comment from CNN Business.

In his Twitter thread, Agrawal said most spam campaigns on Twitter use a combination of humans and automation, rather than being primarily led by bots. Parsing legitimate from fake accounts can be complicated, he said. “The hard challenge is that many accounts which look fake superficially — are actually real people,” he said. “And some of the spam accounts which are actually the most dangerous — and cause the most harm to our users — can look totally legitimate on the surface.” Agrawal said Twitter had been in touch with Musk on the spam issue. “We shared an overview of the estimation process with Elon a week ago and look forward to continuing the conversation with him, and all of you,” he added.
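Twitter has not published the details of its sampling process, but the general technique Agrawal describes, estimating a population proportion from a human-reviewed random sample, can be sketched as below. The numbers and names are assumptions for illustration only; they show why a sampled estimate carries a margin of error and can differ from the true rate.

```python
import math
import random

def estimate_spam_rate(accounts, sample_size=10_000, seed=0):
    """Estimate the fraction of spam accounts the way a human-review
    audit might: label a random sample, then report the sample
    proportion with a 95% normal-approximation margin of error."""
    rng = random.Random(seed)
    sample = rng.sample(accounts, sample_size)
    p_hat = sum(sample) / sample_size  # fraction labeled spam (1 = spam)
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat, margin

# Simulated population: 1 million accounts, 5% of which are truly spam.
population = [1] * 50_000 + [0] * 950_000
p_hat, margin = estimate_spam_rate(population)
print(f"estimated spam rate: {p_hat:.3%} ± {margin:.3%}")
```

With a sample of 10,000 the margin of error is under half a percentage point, but the estimate depends entirely on the reviewers labeling accounts correctly, which is exactly the difficulty Agrawal flags: accounts that look fake may be real, and the most dangerous spam accounts can look legitimate.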

Imagine if Saturday’s livestreamed video of the attack in Buffalo were legally required to remain on social media. Imagine Facebook, Twitter and YouTube were forced to allow those gruesome images, or posts amplifying the suspect’s racist ideologies, in between wedding photos and your aunt’s tuna casserole recipe, with no way to block it. Imagine videos of murder and hateful speech being burned into your brain because the law requires platforms to host all content that isn’t strictly illegal. A Texas law has made all of that a real possibility.

The law in question, HB 20, restricts tech platforms’ ability to moderate user-generated content. In the name of free speech, HB 20 prohibits social media companies from blocking, banning or demoting user posts or accounts, and enables Texans to sue the platforms if they believe they’ve been silenced. After a federal appeals court allowed the law to take effect last week, the Supreme Court is now poised to rule on whether the law violates tech platforms’ First Amendment rights. The online broadcast of Saturday’s horrific murder spree only emphasizes the enormous stakes underlying the looming Supreme Court decision. And it puts into stark perspective the policy battle playing out at the state, national and global levels over how — or whether — social media companies should moderate their platforms.

After HB 20 went into effect last week, it raised a host of questions about how social media will function in Texas going forward. Could tech platforms have to offer Texas-specific versions of their sites? Will some platforms stop providing services in Texas altogether? What could social media content in Texas actually look like without content moderation? The answers are still unclear. What seemed like a hypothetical on Wednesday suddenly became painfully real on Saturday as social media firms scrambled to respond to the shooting, which was initially livestreamed on the video platform Twitch.
Although Twitch said it removed the livestream within two minutes, that didn’t prevent the video from being copied and shared on other platforms. Social media companies including Facebook-parent Meta, Twitter, YouTube and Reddit have banned the video from their sites and are working to remove copies of it. But under the Texas law, taking those steps could expose the tech companies to costly litigation. The shooting offers a horrifying example of the dilemma and the challenges facing tech platforms in Texas, and potentially across the nation, if the Supreme Court backs the state’s content moderation law. A ruling siding with Texas would also likely bolster Florida lawmakers, who, driven by their stated belief that tech companies discriminate politically against conservatives, have passed similar legislation that is also tied up in the courts. And it would give a roadmap to other states wishing to erect moderation bans as well. It amounts to a confused regulatory environment in which some state governments are moving to require lax moderation while other policymakers, such as those in Europe, appear poised to impose tighter moderation standards.

New uncertainties for tech platforms under Texas law

If the Texas law is upheld, social media firms will face greater restrictions on how they moderate content. As HB 20 is written, platforms would become liable in the state for taking steps to “block, ban, remove, deplatform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against expression.” The law is so new that no suits have yet been filed over acts of alleged censorship. According to Evelyn Douek, a platform moderation expert at Columbia University’s Knight First Amendment Institute, platforms could try to remove something like the Buffalo video under HB 20 and justify it on the grounds that they are not censoring expression, just removing content.
“It’s not obvious to me that the Texas social media law would require platforms to carry the Buffalo shooting video,” Douek said. But Jeff Kosseff, a law professor and platform moderation expert at the US Naval Academy, said the law is still ambiguous enough to create enormous uncertainty for social media companies. The platforms, he said, would likely face immense legal pressure not to remove graphically violent content, including material like the Buffalo video, because plaintiffs could still claim that removing the videos is itself a stifling of viewpoints under HB 20. Even the threat of such suits could be a disincentive to moderation. “Even if you just remove the video and you say, ‘This video violates our policy,’ you’re still going to open the door to claims it was removed because it was posted by someone who has a particular viewpoint on things,” Kosseff told CNN.

Whether tech platforms are sued for removing the video or sued for removing a user’s viewpoint surrounding the video, the result would be the same — a law that effectively floods digital spaces with violent content, according to Steve Vladeck, law professor at the University of Texas and a CNN legal analyst. “There’s no question that the Buffalo shooting video drives home both the stakes of the HB 20 dispute and what’s wrong with HB 20 itself,” Vladeck told CNN. “If any Texas-based account reposted or rebroadcast the Twitch stream, taking that down would, on my reading, clearly violate HB 20. When you deprive social media platforms of the ability to moderate content, you are all but guaranteeing that they will be awash in violent, inappropriate, and otherwise objectionable posts.”

Beyond the graphic video itself, the Buffalo shooting also implicates the spread of hateful speech online like the kind found in the shooting suspect’s 180-page document, such as racist conspiracy theories.
This type of content would clearly be required to stay up under HB 20, legal experts agreed, because it expresses a clear viewpoint. “My biggest concern is really restricting the platforms’ ability to remove these theories that drive violence,” Kosseff said.

A push to rethink content moderation

Beyond what HB 20 requires of tech platforms, the Buffalo video also raises questions surrounding some voluntary proposals to relax content moderation standards, such as what billionaire Elon Musk has in mind for Twitter. Musk is currently seeking to purchase Twitter in a $44 billion deal, saying he intends to bring more free speech to the platform by easing Twitter’s enforcement of content rules. How might a Musk-owned Twitter handle the Buffalo video? It’s not immediately clear. Twitter declined to comment and Musk didn’t immediately respond to a request for comment. Musk has said that under his ownership Twitter would be much more reluctant to remove content or to permanently ban individuals — and that he believes Twitter should allow all legal speech in the jurisdictions where it operates. He has softened his stance in recent weeks, however, acknowledging at a Financial Times conference that he would likely place some limits on speech. “If there are tweets that are wrong and bad, those should be either deleted or made invisible, and a suspension, a temporary suspension, is appropriate,” Musk said. He added: “If they say something that is illegal or otherwise just destructive to the world … perhaps a timeout, a temporary suspension, or that particular tweet should be made invisible or have very limited traction.” Musk has not said whether Twitter should consider something like the Buffalo video to be either “wrong and bad” or “destructive to the world.” Should he conclude that it is, then his stance on the video could also risk running afoul of HB 20.
The result would be a clash between two entities — Musk and the Texas government — that ostensibly share the same goal of allowing more content that social media platforms, at least today, widely agree is objectionable. The immediate reaction by major social platforms to remove the Buffalo video reflects an established consensus about how to handle livestreamed videos of violence, one informed by years of painful experience. But rather than affirming that consensus, recent developments could now fracture and muddy it, with important ramifications for all social media users.

The hate-filled forum 4chan, where all users post anonymously, appears to be at the center of the made-for-the-internet massacre that took place in a Buffalo supermarket on Saturday — from discussion on the platform apparently helping inspire the alleged attacker to spreading the gruesome video of the shooting.

A 180-page document that has been attributed to the man suspected of the shooting, in which 10 people were killed, references how he was influenced by what he saw on 4chan, including how he was inspired by watching a video of the 2019 mass shootings in Christchurch, New Zealand, which were also streamed live.

Ben Decker, the CEO of Memetica, a threat analysis company, told CNN, “this is a step-by-step copycat attack of Christchurch, in both the real-world attack; planning and selecting the target, and online; coordinating the livestream and manifesto dissemination across fringe message boards.”

4chan, which was created in 2003, claims it receives 22 million unique visitors a month, half of whom it says are in the United States. While the site hosts forums on a variety of topics — including video games, memes and anime — and says it has rules against racism, its lax approach to content moderation means that hate speech not allowed by more mainstream platforms spreads more freely on 4chan.

4chan is part of the internet’s Wild West. While Big Tech platforms like Facebook and Twitter at least try to police their sites, almost anything goes on 4chan. Some parts of its forums are almost exclusively dedicated to the sharing of racist and antisemitic memes and tropes. A similar site, 8kun — originally called 8chan, which gained prominence after 4chan banned discussion of the movement known as Gamergate — has been linked to other atrocities.

Immediately following the Buffalo shooting, some users on 4chan did not discuss the horrific loss of human life but instead shared methods for reuploading the shooting video so it could be seen by more people.
Twitch, the Amazon-owned service on which the shooter had livestreamed part of the attack, said it removed the video for violating its policies two minutes after the violence in the video began. The livestream itself was seen by only a small number of people, perhaps as few as 20, according to screenshots of the stream that have circulated.

4channers who had apparently screen-recorded the livestream discussed tactics for re-uploading the video to other sites, along with services that could be used to hide their identities as they did so. By Sunday, copies of the video were circulating across the internet. Some of those copies were reportedly viewed millions of times.

Platforms like Facebook and Twitter banned the sharing of the video on their sites, but the companies were clearly struggling Sunday to contain its spread. Statistics for the Buffalo video aren’t yet available, but in the 24 hours after the Christchurch shooting, Facebook said it removed 1.5 million copies of the shooter’s video.

The preservation and sharing of these videos by far-right communities on 4chan and other fringe message boards can help inspire further bloodshed, according to Decker, as evidenced by what the Buffalo suspect wrote in his alleged document. CNN has reached out to 4chan for comment.
