“Beginning in September 2021, we became subject to government investigations and requests relating to a former employee’s allegations and release of internal company documents concerning, among other things, our algorithms, advertising and user metrics, and content enforcement practices, as well as misinformation and other undesirable activity on our platform, and user well-being,” the company said in its quarterly earnings filing with the Securities and Exchange Commission (SEC).

The filing did not specify whether the government “investigations and requests” refer to known inquiries by the US Senate and UK Parliament, or to previously unconfirmed probes from federal agencies in the United States and from governments abroad.

Facebook declined to provide specifics on the disclosure. “We are always ready to answer regulators’ questions and will continue to cooperate with government inquiries,” spokesperson Andy Stone said in a statement to CNN Business.

The Federal Trade Commission (FTC) is said to have begun “looking into” the whistleblower’s disclosures, the Wall Street Journal reported Wednesday, citing unnamed sources familiar with the matter. In particular, the agency is reportedly looking at whether Facebook might have violated its $5 billion settlement with the agency in 2019 over the company’s data privacy practices. Juliana Gruenwald of the FTC’s Office of Public Affairs declined to comment, adding, “FTC investigations are nonpublic so we generally do not comment on whether we are investigating a particular matter.”

Haugen, a former Facebook product manager who left the company in May, provided the documents as evidence to support at least eight complaints to the SEC alleging that Facebook (FB) had misled investors and the public about issues that surfaced internally. She also provided redacted versions of the documents to Congress.
The leaked documents also formed the basis of the Wall Street Journal’s “Facebook Files” series and, more recently, a host of reporting by a consortium of news organizations, collectively referred to as the “Facebook Papers.” CNN is a member of the consortium.

The SEC did not immediately respond to a request for comment on whether it has launched an investigation based on Haugen’s complaint and disclosures from another, anonymous former Facebook employee-turned-whistleblower that emerged Friday.

The documents provide the deepest look yet at many of Facebook’s biggest problems and how they’ve been discussed internally — unprecedented insight into the nearly $890 billion company, whose apps are now used by more than 3.6 billion people worldwide. Facebook has aggressively pushed back on many of Haugen’s claims and much of the reporting around the documents, which the company says mischaracterizes its research and efforts. “Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or wellbeing misunderstands where our own commercial interests lie,” a company spokesperson previously said in a statement.

Despite the disclosure — and the wave of critical news coverage of the company — it’s not clear Facebook shareholders are concerned. After Facebook reported Monday that its quarterly profit had grown to more than $9 billion, the company’s stock rose as much as 3% in after-hours trading. Facebook shares are currently down nearly 11% since last month, but that may reflect investor concern about the impact of Apple’s privacy changes on the social media giant’s ad business.
The declines in engagement, visualized in charts on an internal document viewed by CNN Business, looked like wiggly black-and-blue frowns. Facebook decided to make a change — and fast.

That December, other internal documents show, the company quickly came up with a plan: it would refocus its News Feed algorithm on a new metric it referred to as “meaningful social interactions,” or MSI, for ranking people’s interactions on Facebook. MSI would assign different point values to things such as “likes” and comments on posts, and even RSVPs to events, and take into consideration the relationships between people, such as that of a person writing a post and a person commenting on it. It launched in early 2018, marking a new era in the ways the social network monitors and manipulates its users, and experiments on their data.

The introduction of the MSI metric quickly helped with Facebook’s engagement problem, according to an internal research note from November 6, 2019. Yet, as first reported by The Wall Street Journal last month, the company has also long been aware of the extent to which it fostered negativity online, according to disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Facebook whistleblower Frances Haugen’s legal counsel. The redacted versions were obtained by a consortium of 17 news organizations, including CNN.

This ranking system was built and rolled out rapidly: According to an internal note from December 21, 2017, Facebook put the first iteration together within just over a month so it would be done by the end of 2017 for deployment in early 2018. A Facebook spokeswoman told CNN Business that the introduction of MSI wasn’t a “sea change” in how the company ranked users’ activity on the social network, as it previously considered likes, comments, and shares as part of its ranking.
The spokeswoman also said that a “significant amount of internal and external research” led to MSI, and that Facebook tested “various versions” of it before launch, as it would for “any changes” to its ranking systems.

“When News Feed decided to goal on meaningful interactions in November, the question came up: how can we use rigorous research and science to decide what weight should go on a like vs a comment etc in a little over one month?” the note asked (emphasis the authors’).

The answer, according to the note itself, came via surveying tens of thousands of Facebook users and digging through the company’s massive trove of user data. It was a process with the sheen of statistical rigor, but was also one of trial and error, and human judgment, the documents reveal.

Move fast and automate things

It’s hard to say how Facebook’s use of algorithms that take into consideration actions and interactions on its platforms compares to other social networks, but one thing is clear: For years now, all major social networks — and many other online content services, such as Netflix and YouTube — have been incredibly reliant on algorithms to govern what you see. And as the largest social network of them all, Facebook’s algorithms impact more than a third of the people on the planet. So when Facebook decided to use MSI to inform its algorithms that recommend News Feed content, it was making a change that would affect billions of people.

As the note in December 2017 explained, getting to the values MSI used at launch was not a simple task.
Facebook ran surveys on over 69,000 people in five countries that are among its largest in terms of monthly active users, asking them about “what feedback they find meaningful to give and to receive.” This let the company determine how people valued interactions with different types of people — such as close friends versus acquaintances — and different types of interaction — such as comments on posts versus shares.

Those findings, which included that people found more meaning in whom they interacted with than the type of interaction, “helped validate” and “fine tune” the company’s data science findings, the document said, and helped Facebook adjust how it weights the relationships between people interacting on the social network.

The company also used surveys, along with existing knowledge about its users and internal data science experiments, to help understand how to build a scale for ranking interactions. For instance, polling detailed in the note found that many users put a low value on having their posts reshared. That’s because they viewed it as an interaction between the sharer and that person’s friends.

The note pointed out that the company did things such as analyzing “a bunch of experiments that give people extra feedback.” In those experiments, some posts were given “a little more likes” and others were given “a little more comments.” The results were used to predict how many additional original posts people would generate considering the number of likes, comments, and reactions they had received in previous posts.
This helped Facebook come up with a scale, labeled in the note as the final weight for the metric in the first half of 2018: each “like,” for instance, would be worth 1 point; a reaction emoji or a reshare of a post without adding any text would be worth 5 points; an RSVP for an event would be worth 15 points; and comments, messages or reshares deemed “significant” — defined as having “at least 5+ unique tokens, or a photo or video (in case of shares and messages)” — would be worth 30 points, according to the note. The company could then multiply the total by a figure that represented the closeness of the relationship between the people interacting: were they members of the same group, perhaps? If so, multiply by 0.5. Total strangers? Multiply by 0.3.

Jenna Burrell, director of research at nonprofit Data & Society, which studies social implications of technologies, told CNN Business that the research Facebook conducted on users in this case appeared to be quite limited, as the document doesn’t mention surveying users on the actual content they post — words, pictures, or videos — or comments they might leave on others’ posts. “What they’re trying to get at is something that’s really hard to reduce to a metric,” she said.

Beyond that, the decision to focus on meaningful social interactions was the kind of switch that would require “thousands of different ways of testing it,” according to Ethan Zuckerman, an associate professor at the University of Massachusetts at Amherst who studies how media can be used to enact social changes. “Because you’re really just rewiring the whole network. So you have to be phenomenally careful and phenomenally thoughtful about how you do it,” he said.

Another complicating factor is that while Facebook is an international social network, people don’t use Facebook in the same ways in every country.
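The point scale and relationship multipliers described earlier in this section can be sketched as a simple scoring function. This is a hypothetical illustration, not Facebook’s actual code: only the point values (1, 5, 15, 30) and the relationship multipliers (0.5 and 0.3) come from the reported document; all function, key, and category names are assumptions.

```python
# Minimal sketch of an MSI-style score, assuming the point values and
# multipliers reported from the internal note. All names are illustrative.

# Base point values reported for the first half of 2018.
BASE_POINTS = {
    "like": 1,
    "reaction": 5,              # reaction emoji
    "reshare_no_text": 5,       # reshare without added text
    "event_rsvp": 15,
    "significant_comment": 30,  # "at least 5+ unique tokens, or a photo or video"
    "significant_message": 30,
    "significant_reshare": 30,
}

# Relationship multipliers reported in the note.
RELATIONSHIP_MULTIPLIERS = {
    "same_group": 0.5,
    "strangers": 0.3,
}

def msi_score(interaction_type: str, relationship: str) -> float:
    """Score one interaction: base points times the relationship weight.

    Relationships not listed in the note fall back to a neutral 1.0 here;
    that default is an assumption, not something the document specifies.
    """
    points = BASE_POINTS[interaction_type]
    multiplier = RELATIONSHIP_MULTIPLIERS.get(relationship, 1.0)
    return points * multiplier

# A "significant" comment between members of the same group: 30 * 0.5
print(msi_score("significant_comment", "same_group"))  # 15.0
```

The key design point the note describes is that the same action is worth different amounts depending on who is interacting, which is what the multiplier step captures.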
In Myanmar, for instance, which in 2018 became well-known for the deadly impact of hate speech spread via Facebook, Zuckerman said Facebook is seen as more of a news service than a personal network. “The notion that Facebook is your friends and your personal relationships — that’s just not true for some of the world,” he said.

Before the launch of the MSI scale, the Facebook spokeswoman said, Facebook conducted three separate tests of it, using various strengths of the scale, on a subset of users. This sort of test, followed by any tweaks that are needed, is standard practice, the spokesperson said, ahead of the introduction of any ranking change.

“Beneficial for most metrics”

Facebook introduced MSI publicly on January 11, 2018, as a way to prioritize posts from friends, family members, and groups. In an interview with CNN at the time, Adam Mosseri, who was then a VP at Facebook and today heads Facebook-owned Instagram, described the move to push meaningful social interactions as a “rebalancing” of how Facebook’s algorithms rank items in the main feed. “We think that we’re currently slightly overvaluing how much time people spend on our platform and undervaluing how many meaningful interactions they have with other people,” said Mosseri, who at the time oversaw News Feed.

Indeed, throughout 2018, as the November 6, 2019 research note recounted, the use of MSI was “beneficial for most metrics,” around the world and in the US/Canada region. It led to increases in social interactions, such as likes and comments between users, as well as other “critical ecosystem metrics,” such as the number of people using Facebook daily, revenue, and time users spent looking at their News Feeds.

“Long-term effects on democracy”?

Yet Facebook quickly discovered that its emphasis on MSI wouldn’t have the same impact in every country and across every type of device.
For instance, as that same November 2019 note stated, the company found in April 2018 that reliance on the metric was “hurting” Facebook’s daily active Android users in India; the authors of the note wrote that Facebook could make up for this loss by reducing its reliance on MSI and increasingly emphasizing videos it recommended in users’ feeds. By the end of September that year, the document said, Facebook had identified 11 countries where it used a “more balanced strategy” of MSI plus “appropriate amounts of video.” The note also pointed out that “the dynamics of Feed are changing constantly,” and in early 2019 the company’s ranking team concluded that optimizing for MSI “was no longer an effective tactic for growing sessions”; public-content ranking, it said, was a “better strategy.”

And less than a year after its launch, documents indicate Facebook knew there were deeper issues with relying on the metric. A November 2018 research memo titled, “Does Facebook reward outrage? Posts that generate negative comments get more clicks,” pointed out that an analysis that month showed more negative comments on posts linking to BuzzFeed led to more clicks on that link. Looking at 13 additional popular publishers and domains, the author of the research found the problem stretched far beyond BuzzFeed.

The memo also pointed out that, because of this trend, some publishers may choose to capitalize on negativity. “With the incentives we create, some publishers will choose to do the right thing, while others will take the path that maximizes profits at the expense of their audience’s wellbeing,” the memo stated.

“Ethical issues aside, empirically, the current set of financial incentives our algorithms create does not appear to be aligned with our mission,” the memo read, emphasis the author’s.
“We can choose to be idle and keep feeding users fast-food, but that only works for so long; many have already caught on to the fact that fast-food is linked to obesity and, therefore, its short term value is not worth the long-term cost.”

The move to MSI wasn’t just an issue for publishers: political parties were concerned, too. Another internal research note from April 1, 2019, pointed out that multiple European political parties claimed that the arrival of MSI in 2018 “changed the nature of politics. For the worse.”

The parties argued, the note said, that by emphasizing resharing content, Facebook was “systematically” rewarding “provocative, low-quality content” and parties felt they needed to adjust by pumping out “far more negative content than before” because engagement on positive and policy posts had fallen dramatically. In Poland, for instance, the note said that one party’s social media management team estimated that its posts changed from half positive and half negative to 80% negative because of the algorithmic change. In Spain, the document said, parties reported feeling “trapped in an inescapable cycle of negative campaigning by the incentive structures of the platform.” “Many parties, including those that have shifted strongly to the negative, worry about the long-term effects on democracy,” the document read.

Facebook employees were concerned about the impact of MSI, too. In a comment on that same document, a Facebook employee responded to the Spain data by saying it made their “heart cringe.” “I have seen the effect this has had on my mother and how she has become polarized. It’s hard to rally people to cries of ‘be reasonable’ … to find common ground,” they wrote.

Over time, internal documents show, Facebook employees proposed changes to MSI, some of which may sound small but show the difficulty with assigning numbers to interactions.
For example, the company’s reaction emojis, and the “haha” one in particular, don’t always land the same in every country. In a December 11, 2019 note on the company’s internal platform titled “We are Responsible for Viral Content,” the author wrote that “haha” reactions “are seen as insulting on non-humorous posts in Myanmar.” They included a cartoon translated from Burmese that, according to the note, read “You have been deferred from this year education because you reacted with ‘Haha’ to all my posts.” At the time, all reactions were still weighted the same, but the author noted that “promising proposals are in the works to change this.”

Changes were made to MSI numerous times after its launch, such as in early and late 2020; the Facebook spokeswoman said the formula behind it is “continually updated and refined based on new research and direct feedback from users.” A post in an internal employee group on September 15 of that year forecast changes planned for around October 1 intended “to make MSI capture more useful interactions.” These included filtering out some so-called “bad interactions,” such as deleted comments and single-character comments, and rejiggering the weights associated with reaction icons. Most notably, Facebook said it would make the “angry” reaction, which had previously been demoted to 1.5 points, worth zero points.

A document from July 2020 laying out the proposal for weight revisions coming in the second half of the year gave a hint as to how the company landed on that decision. “After discussing with Comms to decide between Angry 0 vs 0.5 to see if one is more externally defendable, this is our final proposal on 2020/07/31,” it read, listing a zero value for “angry” in a table below.
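The September 2020 revision described above amounts to a filter plus a weight change, which can be sketched as follows. This is a hypothetical illustration: only the zeroing of the “angry” weight (previously demoted to 1.5) and the filtering of deleted and single-character comments come from the reported documents; every name and the other reaction values are assumptions.

```python
# Hypothetical sketch of the reported September 2020 MSI revisions:
# drop "bad interactions" and zero the "angry" reaction's weight.
# Only the angry-to-zero change and the bad-interaction filter are from
# the reported documents; other values and all names are illustrative.

REACTION_WEIGHTS = {
    "love": 5,   # illustrative placeholder value
    "haha": 5,   # illustrative placeholder value
    "angry": 0,  # reported: cut from 1.5 to 0 in the 2020 revision
}

def keep_interaction(comment_text: str, was_deleted: bool) -> bool:
    """Filter out interactions the internal post called 'bad':
    deleted comments and single-character comments."""
    if was_deleted:
        return False
    if len(comment_text.strip()) <= 1:  # single-character comment
        return False
    return True
```

The effect of a zero weight is that an “angry” reaction still exists for users but contributes nothing to a post’s MSI total, removing the ranking incentive it previously created.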
The documents make clear that senior leadership, including CEO Mark Zuckerberg, were made aware of the potential for real-world harms from the company’s various platforms — amplifying hate speech, encouraging eating disorders in teens, inciting violence — and did nothing about it. There’s little, if anything, in the revelations that looks good for Zuckerberg, the 37-year-old founder who built Facebook from a dorm room project into a nearly trillion-dollar company on the mantra “move fast and break things.” Outraged activists, pundits and lawmakers are demanding Zuckerberg take responsibility — the fish rots from the head down, after all. But holding Zuckerberg accountable is much easier said than done.

For its part, Facebook has pushed back on many of the reports leaked to the media, saying they are misleading and mischaracterize its research and actions. On an earnings call Monday, Zuckerberg sought to reframe the so-called Facebook Papers as a “coordinated effort to selectively use leaked documents to paint a false picture of our company.”

Toothless shareholders

Facebook’s tiered stock structure makes ousting Zuckerberg practically impossible. Although he owns less than half the company’s stock, the class of shares Zuckerberg holds carries far more voting power than common stock. That means Zuckerberg controls a majority of the company’s voting shares. Even if the board and every shareholder united against him, Zuckerberg would still be able to get his way.

“He’s a king, he’s not a CEO,” former Facebook employee Yael Eisenstat told Time earlier this month.

His powerful position at the helm of Facebook, Instagram and WhatsApp gives Zuckerberg “unilateral control over 3 billion people,” Frances Haugen, the Facebook whistleblower, told UK lawmakers on Monday.

And shareholders aren’t likely to complain too much anyway. Facebook, for all its faults, has made them immensely wealthy.
Although Facebook stock has lagged behind tech competitors like Apple and Google, shares are up nearly 75% since October 2019.

On Friday, a consortium of 17 US news organizations began publishing a series of stories — collectively called “The Facebook Papers” — based on a trove of hundreds of internal company documents which were included in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Haugen’s legal counsel. The consortium, which includes CNN, reviewed the redacted versions received by Congress.

Wall Street shrugged Monday, as scathing headlines based on the Facebook Papers spread across the internet. But on Tuesday, after Facebook’s earnings report missed analysts’ expectations for sales, its stock fell 4%. Investors care only about dollars and cents.

Gridlock in DC

In Washington, lawmakers have been playing catch-up to try to regulate a company that has easily sidestepped government oversight. Lawmakers have introduced several pieces of bipartisan antitrust legislation in the House targeting Big Tech broadly. But Facebook’s structure is uniquely murky, even among tech companies, according to Haugen. “At other large tech companies like Google, any independent researcher can download from the Internet the company’s search results and write papers about what they find,” Haugen told Congress earlier this month. “But Facebook hides behind walls that keep researchers and regulators from understanding the true dynamics of their system.”

In other words, it’s a complex problem that’d be hard to solve even if Congress weren’t hobbled by its own internal squabbling. The antitrust angle is slow-going, too. Over the summer, a federal judge tossed out the Federal Trade Commission’s case arguing that Facebook was a monopoly, citing lack of evidence. The FTC refiled its case, and Facebook again filed a motion to dismiss it earlier this month.
Some have proposed an entirely new regulatory body focused on tech giants. “Digital companies complain (not without some merit) that current regulation with its rigid rules is incompatible with rapid technology developments,” writes Tom Wheeler, a former chairman of the Federal Communications Commission. “Oversight of digital platforms should not be a bolt-on to an existing agency but requires full-time specialized focus.”

Tech leaders, including Zuckerberg, have expressed openness to the idea in the past. Of course, it’s easy to say yes to a hypothetical. And not everyone’s buying it. Facebook may countenance the idea of external regulation, but “at the same time it is fighting that regulation tooth and nail, day and night, with armies of lawyers, millions of dollars in lobbying,” said Senator Richard Blumenthal on CNN’s “Reliable Sources” Sunday. “Facebook saying it wants regulation is the height of disingenuousness.”

Advertisers can’t afford to leave

Big advertisers could potentially score PR points with a boycott of the site, but even that is unlikely to make a major dent in Facebook’s bottom line. That’s because the vast majority of Facebook’s ad revenue comes from small businesses that can hardly afford to leave. In the summer of 2020, hundreds of household-name brands boycotted the platform over its handling of hate speech in the #StopHateforProfit campaign. But Facebook’s stock price and ad revenue have only grown since then. Facebook generated more than $28 billion in ad revenue during the third quarter alone. That’s up by 33% from a year ago.

Changes afoot?

The Facebook Papers offer some of the most damning evidence that Facebook is directly responsible for real, tangible harm. And worse, it’s been ignoring the harm for years. Far more than any previous scandal the company has weathered, this one feels like a turning point. But the resolution to this scandal won’t be swift or simple.
If the whistleblower’s documents show us anything, it’s that there are meaningful concerns among some employees, including Haugen herself — and multiple whistleblowers are coming forward to keep pressure on the company. But of course it’s unclear if that alone would be enough.

Washington won’t be able to suddenly resolve the regulation issues, and Wall Street isn’t going to turn its back on a money-making machine. Critically, the 3 billion people using Instagram, WhatsApp and Facebook aren’t going to turn the apps off on principle. For many, those apps have become vital tools for communication, synonymous with the internet itself. The company will keep spinning its own narrative and downplaying critics, and it may even work on addressing the problems in the meantime. But given Facebook’s scale — and its track record of evading regulation while minting money for shareholders — a full-scale reckoning seems unlikely.

—CNN Business’ Clare Duffy, Paul R. La Monica and Matt Egan contributed to this report.
“To me, there’s nothing better than having a political discourse in plain and open view and having access to your elected officials, and being able to hold them accountable,” Gadde, then Twitter’s general counsel, told the audience at a New York University School of Law event. “In that sense, I think it’s a great thing because this wasn’t always possible before.”

“Now,” she added, “the consequences of that direct dialogue are unfolding in front of us and not something we could’ve quite predicted.”

Less than three years later, the United States faced the most troubling consequence yet: A group of rioters attacked Capitol Hill on January 6 after Trump spent weeks using social media platforms to agitate his base and spread a lie that the 2020 election had been stolen. Gadde, who by then had become head of legal, policy and trust at Twitter, found herself at the center of deciding whether to take the unprecedented step of banning Trump from Twitter.

Two days later, Twitter permanently banned Trump, citing a “risk of further incitement of violence.” The move was praised by civil rights advocates who called on Facebook and YouTube to follow Twitter’s lead — while others said the move should have come much sooner. Facebook (FB) had blocked Trump’s accounts “indefinitely” a day earlier, a suspension that was upheld in May by the company’s court-like Oversight Board and is up for review again in November. Google-owned YouTube announced a suspension of Trump’s channel a week later but has left the door open for him to return to the platform.

Twitter’s decision to remove Trump didn’t happen immediately. The platform initially banned Trump for 12 hours on January 6. Twitter took heat when it let him back on and he quickly tweeted again, calling his followers who stormed the Capitol “patriots.” The Washington Post detailed a January 8 meeting in which Gadde made an “impassioned appeal” for staffers to have patience as her team deliberated what to do.
Hours later, Twitter banned Trump permanently. Multiple outlets reported that Gadde played a central role in the Trump ban decision. Twitter CEO Jack Dorsey was reportedly vacationing on a private island at the time. (Twitter has said Dorsey was closely involved in the decision.) Asked this month about her role in banning Trump, a Twitter spokesperson told CNN Business: “Policy enforcement decisions are made by our Trust and Safety team, which report to Vijaya Gadde.” The spokesperson also said Twitter has “no plans to reinstate” Trump’s account.

The Trump ban marked the boldest — and riskiest — decision in the tech firm’s 15-year history: cutting off a sitting world leader and its most high-profile user, who had amassed nearly 89 million followers and driven massive attention to the platform. Not only did the ban risk pushback from Trump and regulators, it set a tough new standard for the company to live by in other countries. It also kicked off a larger debate about whether “deplatforming” actually works to prevent potential harms from social media platforms. But the decision also highlighted the disproportionate impact that Twitter, and Gadde, can have within the tech industry despite its comparatively small audience and resources.

“It forced the hand of competitors like Facebook and like Google’s YouTube, which are much bigger companies in scale,” said Katie Paul, director of the nonprofit research organization Tech Transparency Project. “[Banning Trump] was an important moment for the company’s really setting a line and showing that they do have the power to shut down these things.”

Now Twitter is facing similarly thorny questions in other major democracies around the world, including conflicts with governments in India and Nigeria.
Gadde will likely be heavily involved in resolving these issues, too. “Vijaya is at the crossroads of some of the most important policy decisions the company is making and how it interacts with governments around the world … and how Twitter is thinking through the trust and safety of its platform,” Adam Bain, Twitter’s former COO who worked closely with Gadde before leaving the company in 2016, told CNN Business. “It’s an extremely important job at the company.”

A ‘steady hand’ at Twitter

Gadde immigrated to the United States from India with her parents in the 1970s and grew up on the Gulf Coast of Texas. After attending Cornell University for industrial and labor relations and then NYU School of Law, she spent a decade working in corporate law. She was inspired by her aunt, one of India’s first female lawyers, she told the NYU audience.

She joined Twitter in 2009, three years after it launched, motivated in part by her father-in-law in Egypt, who had begun using Twitter as the country’s pro-democracy movement started to brew. Gadde first helped run the corporate legal department, playing a role in Twitter’s acquisitions and its 2013 IPO. That year, she became general counsel.

Twitter is known for some volatility, with three CEOs during Gadde’s tenure. But inside Twitter, she has been “an extremely steady hand” and “the type of leader that people love working for and with,” according to Bain. “What she’s focused on is making the right decisions based on facts and the right process. She doesn’t predicate the outcome.”

Beyond that, Bain said, she believes Twitter should be “as open” as possible with the world to “build trust,” as evidenced by her pushing for the company’s biannual transparency reports. Twitter declined to make Gadde available for this story.

While Twitter is much bigger now than when Gadde joined, its audience and market cap remain less than a tenth the size of Facebook’s.
Yet the two companies are often mentioned in the same breath given Twitter’s outsized importance shaping media and politics. And as Twitter’s influence has grown, so has Gadde’s. She’s met with lawmakers and regulators around the globe, including Trump in 2019. Gadde and her team may also be more empowered to make big moves at Twitter than they would be at other tech firms because of its smaller size. “It means they have smaller teams and fewer lobbying dollars to work with,” said Marietje Schaake, international policy director at Stanford University’s Cyber Policy Center and a former European Parliament member focused on trade and technology policies. “From my experience, the company has been more open to taking proactive steps in their own policies.”

Gadde’s next battles

Within six months, Twitter went from banning one president to being banned after taking action on another. The company was forced offline in Nigeria in June after taking down a controversial tweet by President Muhammadu Buhari, a ban that remains in place. Twitter has said it will “continue to engage” with the Nigerian government, but the company’s relatively muted stance has puzzled activists in the country. “They’ve been surprisingly quiet,” said Gbenga Sesan, executive director of the pan-African digital rights group Paradigm Initiative. “This would have been a good time to, you know, take a categorical stand.”

Meanwhile, taking a stand has put Twitter on a knife edge between its principles and its business in one of its most important global markets: India. The company sparked a conflict there in February by taking down hundreds of accounts at the government’s behest but refusing to take action against journalists, activists and politicians, resulting in an uneasy stalemate that has now dragged on for months.
India passed new technology rules making social media companies liable for what users post on their platforms and requiring each company to appoint designated compliance officials in the country. Twitter has sent mixed signals, initially pushing back and expressing concerns about a “potential threat to freedom of speech” but subsequently pledging to meet the new requirements. Some Indian tech advocates have described the company’s mixed signals as baffling and said they make it harder to defend Twitter against what many see as overreach by the Indian government.

Twitter is fully compliant with India’s rules and “remains committed to safeguarding the voices and privacy of those using our service,” the company spokesperson said. “Twitter leadership, including Vijaya, are continuing to engage in productive dialogue around these issues — and similar issues around the world.”

Gadde has cautioned against suing the government, as Facebook-owned WhatsApp has done. She referred to litigation as a “blunt tool” at a virtual digital rights conference in June. “It’s a very delicate balance to draw when you want to actually be in court, versus when you want to negotiate and try to make sure that the government understands the perspective that you’re bringing,” Gadde said. “Because I do think you can lose a lot of control when you end up in litigation.”

Twitter’s position in India remains precarious. Its presence there is much smaller than rivals like YouTube, Facebook and its subsidiary WhatsApp, which have hundreds of millions of Indian users, often making Twitter a convenient scapegoat. “If the [Indian government] were to go out and shut down WhatsApp, that would cause a significant backlash from the general public,” said Bhaskar Chakravorti, dean of global business at The Fletcher School at Tufts University. “But shut down Twitter? Not as much.”

There’s also more at stake for Twitter. Both India and Nigeria are among the world’s largest and fastest-growing internet user bases.
The way Gadde’s team and Twitter resolve these challenges in those countries could have big implications for the company’s growth and the future of the internet, according to Paul of the Tech Transparency Project. “This is something that’s certainly going to be watched globally and [will be] the model for how companies deal with it moving forward,” she said.
When the trial started on September 8, 12 jurors and five alternates took in opening arguments and braced themselves for a trial expected to run until at least December. But the group entrusted with deciding the fate of Holmes — who faces a dozen federal fraud charges over allegations that she knowingly misled investors, doctors, and patients in order to take their money — has since shrunk. Three jurors have been dismissed so far. One juror was excused on the second day of the trial for financial hardship after being unable to swap her work schedule. Another was excused during week five after disclosing that, as a Buddhist, she was experiencing anxiety over the possibility that her decision as a juror could result in a prison sentence for Holmes. On Friday, a juror was excused after being reported by another juror for playing the puzzle game Sudoku while court was in session, according to a court transcript first cited by the Wall Street Journal. The juror, when questioned in a side room where reporters were not present, told Judge Edward Davila that she is “very fidgety” and that the game did not interfere with her attention to the trial. Holmes’ attorney initiated the request to dismiss the juror, which the prosecution then agreed to. It is enough to give some former prosecutors and legal experts pause. “The loss of three out of five alternates, at this point in the government’s case, puts the prospect of the trial going to a verdict at significant risk,” Mark MacDougall, a white-collar defense lawyer and former federal prosecutor, told CNN Business. Another former federal prosecutor, Jessica Roth, expressed a similar sentiment: “If I were a member of the prosecution team, I would be nervous at this point.” Roth, a professor at Cardozo Law School, cited the risk of a mistrial should there be fewer than 12 jurors — or if either side doesn’t consent to proceeding with fewer than 12 jurors. “You’d have to start all over again,” she said.
A complicated trial gets more complicated

This situation could introduce added complexities to what has already been a complex trial. In addition to the high level of interest in Holmes, a number of prominent individuals and companies are ensnared in the case. The ongoing pandemic, and then Holmes’ pregnancy, also delayed the start of the trial several times. As is often the case when trying a well-known figure, securing an unbiased jury was a tall order the first time around. More than 80 potential jurors were questioned over the course of two days as part of the jury selection process in an effort to find people who’d largely not heard of Holmes. The former CEO once graced the covers of magazines and had been hailed as the next Steve Jobs. Since her indictment in 2018, she and Theranos have been the subject of a best-selling book, documentaries, and podcasts. Roth said she was surprised by the first two juror dismissals, as people with issues of financial hardship or religious beliefs that may conflict with their ability to serve are typically weeded out when potential jurors are first questioned in the courtroom by the judge and attorneys. That questioning came after potential jurors completed a nearly 28-page-long questionnaire. “Now, having there been this much of a trial, it’s even harder,” said Roth of the prospect of finding another pool of untainted jurors should there be a new trial. “You [also] have the difficulty of bringing back all of your witnesses again.” According to MacDougall, the threat of a mistrial can be a motivating factor for the government and defense to try to negotiate a plea deal to resolve the case. Holmes, who has pleaded not guilty, faces up to 20 years in prison. “No lawyer ever wants to try the same case twice,” said MacDougall, who also pointed out that the longer the trial, the more everyday human issues can creep in.
“Jurors get sick, have family emergencies, become irritated with each other or do things that the court has told them not to do,” he said. Some of these issues have already come up. The second day of trial was delayed after a juror reported possible exposure to Covid-19. The trial will also not be in session this Friday, as it usually is, to accommodate a juror attending a memorial service for a family member who died suddenly. Even if the case doesn’t lose more jurors, some experts say the Sudoku incident could be cause for concern. George Demos, a former Securities and Exchange Commission prosecutor and adjunct law professor at the UC Davis School of Law, told CNN Business that a juror playing games during the trial could be a “significant red flag that the government may be losing its audience.” In an effort to speed things up, Judge Davila has begun lengthening the time jurors spend in the courtroom on trial days by an hour or more. On Tuesday, he tacked on an additional court day for several weeks in November. Asked whether there were any objections, one of the remaining alternate jurors said he’d try to accommodate the change if he was the only one experiencing difficulties, but noted it is “getting hard on my work schedule.”
“What they are doing is willfully turning a blind eye to catastrophic problems that would not exist if they were not willing to advertise on these platforms,” McNamee, an early investor in Facebook, told CNN on Tuesday in a phone interview. “The major advertisers have a lot of guilt here.” McNamee, who began advising Facebook (FB) co-founder Mark Zuckerberg in early 2006, said the problem is that the vast majority of companies prioritize maximizing shareholder value instead of what’s good for society. “They allowed themselves to get addicted to the convenience of the products, so they go along with it,” said McNamee, who co-founded venture capital firm Elevation Partners. Facebook generated $28.3 billion in advertising revenue during the third quarter alone, the company said Monday. That’s up by 33% from a year ago and underscores the vast scale of a platform that has a staggering 1.9 billion daily active users. “Facebook is terrible, but I have to maximize shareholder value, so I have to use it. They tell themselves this is not their fault. It’s really Facebook’s. No, it is your fault,” McNamee said.

‘More work to do’

The comments come after a consortium of 17 US news organizations began publishing articles based on the Facebook Papers, a trove of hundreds of internal documents that were included in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Facebook whistleblower Frances Haugen’s legal counsel. The consortium, which includes CNN, reviewed the redacted versions received by Congress.
CNN’s coverage includes stories about how coordinated groups on Facebook sow discord and violence, including on January 6, as well as Facebook’s challenges moderating content in some non-English-speaking countries, and how human traffickers have used its platforms to exploit people. In a statement, a Facebook spokesperson said the company’s “work on safety and security is the most comprehensive effort to remove hate speech of any major consumer technology company.” “We have more than 40,000 people focused on this and are on track to spend more than $5 billion on safety and security in 2021. While we have more work to do, we remain committed to getting this right,” the Facebook spokesperson said.

Sonnenfeld: ‘They must sever ties’

Still, Jeffrey Sonnenfeld, a professor at the Yale School of Management who regularly talks to CEOs about controversial topics, called for business leaders to take action following the recent revelations at Facebook. “They must sever ties as they are complicit tacitly with some of the worst human atrocities, human rights violations and subversion of democracy,” Sonnenfeld told CNN on Monday. But major companies are not the only source of Facebook’s ad revenue. Countless small businesses rely on Facebook to reach customers, and these smaller firms may not have the luxury of saying goodbye to the unparalleled reach of Facebook. That means Facebook is no longer reliant on huge ad spends from major brands. It has an army of mom-and-pop advertisers to fall back on.

Another boycott?

Hundreds of companies joined an ad boycott against Facebook last year that was organized by Stop Hate for Profit, a campaign that aimed to hold the company accountable for failures to address the incitement of violence on the platform. “You may see one again,” McNamee, who advises Stop Hate for Profit, said of a potential ad boycott of Facebook.
“I wouldn’t rule it out.” Jonathan Greenblatt, the CEO of the Anti-Defamation League, which helped launch Stop Hate for Profit, told CNN on Monday that the ADL is in talks with members of its coalition to “explore the appropriate response” to the Facebook Papers. “There are things advertisers can do to demonstrate their discontent,” Greenblatt said. Although many companies resumed advertising on Facebook after pausing last year during the boycott, some brands have not returned. Patagonia told CNN on Monday that it halted advertising on Facebook and Instagram in June 2020 and has no plans to resume ad spending on the platforms. Advertisers “do have leverage,” McNamee said. “They’re just choosing not to use it.”
Getting a fuller picture would enable us to navigate ships more safely, create more accurate climate models, lay down telecommunication cables, build offshore windfarms and protect marine species — all part of what’s known as the “blue economy,” projected to be worth $3 trillion by 2030. Underwater robotic vehicles equipped with sensors are helping gather that data quicker and more cheaply than ever before. But many of these vehicles rely on batteries with a limited lifespan, and need to return to a boat or the shore to recharge, making it difficult for them to map more remote parts of the sea. Seatrec, a five-year-old startup founded by oceanographer Yi Chao, is rising to the challenge. While working at NASA, Chao developed technology to power ocean robots by harnessing “the naturally occurring temperature difference” of the sea, he told CNN Business.

Greener and cheaper

The power module can be installed on existing data-gathering robots or Seatrec’s own floating device. This dives a kilometer down to examine the chemistry and shape of the seabed, using sonar to create a map of the surrounding area. The robot returns to the surface to send back its findings via satellite. As the float moves between colder and warmer parts of the ocean, material inside the module melts or solidifies; the resulting change in volume builds pressure that drives the robot’s generator, converting the ocean’s thermal energy into electricity. “They get charged by the sea, so they can extend their lifetime almost indefinitely,” Chao said. A basic float model typically costs around $20,000, and attaching Seatrec’s energy system adds another $25,000, Chao said. But access to free, renewable energy and the ability to stay in the water longer make data gathering up to five times cheaper in the long run, according to Chao.
He said the startup is making fewer than 100 devices per year, primarily for marine researchers, but the technology is easily scalable — Seatrec’s energy module can also be retrofitted onto existing mapping devices to extend their range.

Picking up the pace

New technologies that can extend the reach of data-gathering devices are crucial for mapping more remote parts of the deep sea, according to Jamie McMichael-Phillips, director of the Nippon Foundation-GEBCO Seabed 2030 Project. “One of the huge challenges we have is quite simply physics,” said McMichael-Phillips. “Unlike mapping the Earth’s surface, where we can use a camera [or] satellites, at sea, light does not penetrate through the water column. So we’re pretty much limited to using sonar systems.” Launched in 2017, the Seabed 2030 Project has increased awareness about the importance of the ocean floor and given researchers and companies a clear goal to work toward: map the entire seafloor by the end of this decade. Some companies, such as XOCEAN, are surveying the ocean from the surface. Another startup, Bedrock Ocean Exploration, says it can survey seabed areas up to 10 times faster than traditional methods by using an autonomous electric submarine fitted with sonars, cameras and lasers; the data is then analyzed on Bedrock’s own cloud platform.

The challenge ahead

Even with the growing number of technologies accelerating seabed exploration, completing the map is still a logistical and financial challenge. Chao estimates that it would take 3,000 of Seatrec’s floats operating over the next 10 years to fully survey the ocean.
The company has raised $2 million in seed funding to scale up production of its energy-harvesting system. But that is a drop in the ocean of the capital needed for a full survey, which McMichael-Phillips estimates at “somewhere between $3 to $5 billion” — “pretty much the same order of magnitude as the cost of sending a mission to Mars.” Bedrock’s DiMare believes it’s time we start investing in our own planet. “If we want to keep Earth as a place that humans can live,” he said, “we have got to get a lot smarter about what’s going on in the ocean.”
But internal Facebook (FB) documents suggest a disconnect between what the company has said publicly about its overall response to Covid-19 misinformation and some of its employees’ findings concerning the issue. “We have no idea about the scale of the [Covid-19 vaccine hesitancy] problem when it comes to comments,” an internal research report posted to Facebook’s internal site in February 2021, a year into the pandemic, noted. “Our internal systems are not yet identifying, demoting and/or removing anti-vaccine comments often enough,” the report pointed out. Additional reports a month later raised concerns about the prevalence of vaccine hesitancy — which in some cases may amount to misinformation — in comments, which employees said Facebook’s systems were less equipped to moderate than posts. “Our ability to detect vaccine hesitancy comments is bad in English and basically non-existent elsewhere,” one of the March 2021 reports stated. The documents were included as part of disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Facebook whistleblower Frances Haugen’s legal counsel. A consortium of news organizations, including CNN, has reviewed the redacted versions received by Congress. The World Health Organization (WHO) began describing Covid-19 misinformation as an “infodemic” in the early stages of the pandemic last year, amid a flood of social media posts on conspiracy theories about the origins of the virus, dangerous advice about faulty treatments and unreliable reports on vaccines. The organization called on big tech firms to give it a direct line to flag posts on their platforms that could harm people’s health. CEO Mark Zuckerberg posted on Facebook on March 3, 2020, that his company was working with the WHO and other leading health organizations to help promote accurate information about the virus.
At the time, there were only around 90,000 recorded cases globally and about 3,100 known deaths, most of them in China. Approved vaccines were still months away. But the company was already grappling with the spread of misinformation and myths about Covid-19. “As our community standards make clear, it’s not okay to share something that puts people in danger,” Zuckerberg wrote. “So we’re removing false claims and conspiracy theories that have been flagged by leading global health organizations.” He added that the company planned to give the WHO “as many free ads as they need for their coronavirus response along with other in-kind support,” and would give “millions more in ad credits” to other authoritative organizations, too. But a flood of comments raising questions and illegitimate concerns about vaccines on the platform meant that, in some cases, those organizations didn’t want to take advantage of that free help. One of the March 2021 internal reports noted that the rate of vaccine hesitancy comments was so high on Facebook posts “that authoritative health actors, like UNICEF and the WHO, will not use free ad spend we are providing to them to promote pro-vaccine content, because they do not want to encourage the anti-vaccine commenters that swarm their Pages.” Facebook employees were concerned that while the company’s AI systems were trained to detect misinformation in posts, the same wasn’t true for comments, which may be more likely to have vaccine-hesitant content, documents show. “The aggregate risk from [vaccine hesitancy] in comments may be higher than that from posts, and yet we have under-invested in preventing hesitancy in comments compared to our investment in content,” another March 2021 report stated. “One flag from UNICEF was the disparity between FB and IG,” one comment on the report stated, “where they said this: ‘One of the ways we manage these scenarios on Instagram is through pinning top comments.
Pinning helps us highlight our top comment (which will almost always be a link to helpful vaccine information) and also highlight other top comments which are pro-vaccine.'” UNICEF and the WHO did not respond to requests for comment. A Facebook spokesperson said the company had made improvements on issues raised in the internal memos included in this report and said: “We approach the challenge of misinformation in comments through policies that help us remove or reduce the visibility of false or potentially misleading information, while also promoting reliable information and giving people control over the comments in their posts. There are no one-size-fits-all solutions to stopping the spread of misinformation, but we’re committed to building new tools and policies that help make comments sections safer.” Among other efforts since the pandemic began, Facebook — as well as fellow social media giants Twitter and YouTube — has added Covid-19 misinformation to its “strike policy,” under which users can get suspended (and potentially removed) for posting violating content. The platforms also started labeling content related to Covid-19 to direct users to information from authoritative sources. Earlier this year it emerged that Facebook shelved the public release of a “transparency” report after it revealed that the most-viewed link on the platform in the first quarter of 2021 was a news article that said a doctor died after receiving the coronavirus vaccine, the New York Times reported. Irresponsible and sensationalist news media coverage of purported dangers associated with Covid-19 vaccines also appears to be richly rewarded on Facebook: the internal February memo noted that a tabloid story about vaccine deaths had been shared more than 130,000 times on the platform. The company’s challenges have not only been tied to comments and news articles, however.
As of May last year, the “most active” civic groups in the United States “have been the hundreds of anti-quarantine groups in addition to the standard set that have been most active for months/years (Trump 2020, Tucker Carlson, etc.),” according to a May 18, 2020 post to Facebook’s internal site. The post’s author wrote that these groups were rife with Covid-19 misinformation and noted that content from the groups featured heavily in the Facebook feeds of “the tens of millions of Americans who are now members of them.” A Facebook spokesperson told CNN the company had added new safety controls to groups since that internal May 2020 post. In July 2021, President Joe Biden said platforms like Facebook were “killing people” with Covid-19 misinformation. Biden later backed away from that claim, but not before a Facebook executive published a strong rebuke of the President on the company’s website. “At a time when COVID-19 cases are rising in America, the Biden administration has chosen to blame a handful of American social media companies,” Guy Rosen, Facebook vice president of integrity, wrote. “While social media plays an important role in society, it is clear that we need a whole of society approach to end this pandemic. And facts — not allegations — should help inform that effort.” He added that when Facebook sees misinformation about Covid-19 vaccines, “we take action against it,” and pointed to research the company conducted with Carnegie Mellon that shows a great majority of US Facebook users had been or wanted to be vaccinated. Still, the February 2021 internal report about the prevalence of vaccine-hesitant or anti-vaccine messages in Facebook comments suggested that “anti-vax sentiment is overrepresented in comments on Facebook relative to the broader population” in the United States and United Kingdom. “This overrepresentation may convey that it is normative to be hesitant of the Covid-19 vaccine and encourage greater vaccine hesitancy,” the report stated.
A problem with the Crew Dragon toilet was first identified during SpaceX’s Inspiration4 mission in September, which carried four people on the first all-tourism mission to orbit, where they spent three days. Jared Isaacman, the commander and financier of the Inspiration4 mission, told CNN Business last month that an alarm went off during the mission, alerting the crew to an unforeseen problem with the toilet’s fan. He said he and his fellow passengers had to work with SpaceX controllers on the ground to troubleshoot. The issue did not cause any serious problems for the Inspiration4 crew, nor were there any instances of bodily fluids getting loose inside the capsule. “I want to be 100% clear: There were no issues in the cabin at all as it relates to that,” Isaacman said. But after the Inspiration4 crew’s return to Earth, SpaceX disassembled their spacecraft to further inspect what might have gone wrong. “There’s a storage tank where the urine goes to be stored [and] there’s a tube that came disconnected or came unglued,” said William Gerstenmaier, a former associate administrator at NASA who now works as SpaceX’s head of mission assurance. “That allowed urine essentially to not go into the storage tank, but essentially go into the fan system.” The situation highlights how spacecraft that have conducted all the necessary test flights, been vetted and approved, and even completed full missions can still prove to have unforeseen design risks. Fans are used on spacecraft toilets to create suction and control the flow of urine because, in the microgravity environment of space, waste can — and does — go in every possible direction. In this particular case, however, the Inspiration4 crew did not notice any excreta floating around the cabin because the leakage was confined to areas underneath the floor, Gerstenmaier said.
But when a SpaceX team pried up the floor, they confirmed there was “contamination,” he added. As it turns out, another Crew Dragon capsule that was launched earlier but is still in space is dealing with a similar leak. The capsule that had been used to take professional astronauts to the International Space Station in April, on a mission dubbed Crew-2, experienced similar problems. But because those astronauts only had to rely on Crew Dragon’s on-board toilet while in transit to the ISS — rather than the full three days the Inspiration4 crew spent aboard their capsule — the mess wasn’t as bad. That Crew Dragon capsule, however, is still attached to the ISS and will have to be used to return that crew of astronauts to Earth next month. So, SpaceX ran a series of ground tests to make sure the Crew Dragon’s aluminum structure could hold up to the leaked urine and that the substance hadn’t become dangerously corrosive. Basically, SpaceX researchers coated some pieces of metal in urine mixed with Oxone — the same substance used to remove ammonia from urine on board Crew Dragon — to see how it would react with the aluminum. They put it inside a chamber to mimic the vacuum of space, and they found limited corrosion, Gerstenmaier said. “We’ll double check things, we’ll triple check things, and we got a couple more samples we’ll pull out of the chambers and inspect,” he said. “But we’ll be ready to go and make sure the crew is safe to return.” And to make sure that such a situation never crops up again, SpaceX designed a fix for its upcoming mission, opting to weld the wayward tube in place to eliminate the risk of it coming unglued.
NASA still has to give a final OK for that fix, but, all things considered, NASA and SpaceX officials are confident these issues will be put to rest, and this weekend’s mission will move forward. In this case, “that Inspiration4 flight was really a gift for us,” said Steve Stich, NASA’s Commercial Crew Program manager. This weekend’s flight will mark the fourth SpaceX Crew Dragon mission to carry professional astronauts to the space station as part of a deal SpaceX inked with NASA. On board will be NASA astronauts Raja Chari, Tom Marshburn and Kayla Barron, as well as ESA (European Space Agency) astronaut Matthias Maurer. They’ll spend six months aboard the ISS, continuing the space station’s 20-year history of hosting astronauts from all over the world to conduct scientific research.