
The Beijing-based company announced late Friday that co-founder Su Hua was stepping down from the role, while staying on as chairman of the board. Su’s fellow co-founder, Cheng Yixiao, was appointed chief executive, effective Friday. Kuaishou said that Su would dedicate more time going forward to its long-term strategy, while Cheng assumed responsibility for day-to-day operations.

Su’s departure is another example of how some of the country’s top C-suites have gotten a refresh amid a widening regulatory clampdown. In September, e-commerce giant JD.com (JD) said that billionaire CEO Richard Liu would shift some of his focus to long-term strategy as the company took on a new president to help manage its daily affairs.

Earlier this year, Zhang Yiming, the founder of TikTok owner ByteDance, also announced that he would step down as CEO to take a less prominent role in the company. And Pinduoduo (PDD) founder Colin Huang said in March that he would resign as chairman of the upstart e-commerce company, which competes with the likes of Alibaba (BABA). Huang had already left his position as CEO last summer.

Zhang and Huang said they were departing to try new things. Neither referred to the government crackdown in their announcements, and a ByteDance spokesperson said that Zhang’s decision to step down was not related to regulatory moves in China.

Kuaishou is one of China’s leading social media firms. The Tencent-backed company, whose name means “fast hand” in Chinese, owns an eponymous short-video and live-streaming app with about 300 million daily active users. The company went public in an approximately $5.3 billion Hong Kong listing earlier this year, marking what was then the world’s biggest IPO since 2019.

Kuaishou shares fell nearly 4% in Hong Kong on Monday, the first trading day after the announcement.

— Laura He and Jill Disis contributed to this report.

The DeVos family, whose fortune stems from multilevel-marketing company Amway, is one of several big-name investors who backed Theranos. Media tycoon Rupert Murdoch, another potential government witness, is also reported to have sunk $125 million into the startup.

The DeVos family invested in Theranos in 2014 through its family office, RDV Corp, after a family member had her blood drawn. Lisa Peterson, a managing director at RDV who helped vet the deal, testified about the investment — and, due to a juror conflict and a plumbing issue, she was also the only witness introduced this week.

Holmes, once hailed as the next Steve Jobs, is facing a dozen federal fraud charges over allegations that she knowingly misled investors, doctors, and patients about her company’s blood-testing capabilities in order to take their money. Holmes has pleaded not guilty and faces up to 20 years in prison.

Here are the highlights from week eight of the trial:

Holmes targeted wealthy families

Peterson testified that she requested to work on the Theranos deal after first hearing about the company from RDV CEO Jerry Tubergen, who had met with Holmes and her brother at a conference. Tubergen’s excitement about Holmes was evident in an email sent to members of the DeVos family.
“This morning I had one of the most interesting meetings I can recall with the women [sic] profiled in the attached Fortune magazine article,” Tubergen wrote in the email, which was shown in court.

Peterson testified that Holmes handpicked wealthy families to invest and that “she was inviting us to participate in this opportunity.” In an email sent to DeVos family members, Tubergen wrote that heirs to the Walmart fortune were also investing in the round: “Walton family for sure (I’m thinking nice synergy there).” (The Waltons reportedly invested $150 million in the company.)

Peterson said she did due diligence on the deal, including compiling a memo with information from two binders Theranos sent, notes from a call with Holmes, and online research. According to her memo, Holmes was looking for long-term investors interested in the company’s mission to “‘do well and do good’ over the course of multiple generations.”

Holmes’ attorney, Lance Wade, repeatedly questioned the thoroughness and adequacy of Peterson’s due diligence in what became an increasingly tense exchange, which is set to continue when court resumes next week. He quizzed her on whether her due diligence was “exhaustive,” “thorough,” and “adequate,” and questioned why she hadn’t visited a Walgreens or hired regulatory or medical experts. In her testimony, Peterson said she trusted the information Theranos and Holmes provided — including a report with Pfizer’s logo on it, which appeared to validate the startup’s technology. Jurors learned last week that was not the case.

Peterson, Tubergen, and three DeVos family members flew to Theranos’ headquarters in October 2014 before investing $100 million, double the amount they’d originally anticipated. Peterson said that while there, Cheri DeVos had her blood drawn by finger stick. Emails shown in court reveal a sense of urgency to move quickly and not miss the opportunity to invest.
Bryan Tolbert, who testified the previous week, also spoke of a time crunch: He was given a very limited window in which to make a decision in late 2013. Tolbert’s and the DeVos family’s investments account for two of the six wire fraud counts included among the 12 criminal charges that Holmes is facing.

Jurors get a closer glimpse of Elizabeth Holmes

While it is still unclear whether jurors will hear from Holmes herself once the defense gets its turn to introduce witnesses, they are getting a closer look at how she presented herself and the company in her own words. Last week, the jury heard audio clips of Holmes from an investor call — the first time they’d heard her distinctive voice. This week, they saw TV interviews Holmes gave after the initial Wall Street Journal investigation into Theranos.

Clips from “Mad Money” and “The Today Show” showed how Holmes presented the company publicly. “I’m the founder and CEO of the company — anything that happens in this company is my responsibility,” she said in the April 2016 Today Show clip. The comment potentially contradicts her defense, which has pointed the finger at others for the company’s failings.

A long-delayed trial can’t catch a break…

The pandemic, and Holmes’ pregnancy, have dealt the trial several delays. This week, there was another unexpected holdup: a pipe burst near the San Jose federal courthouse, leaving the building without water. The court was ordered to vacate the building.

The incident came as Judge Edward Davila tacked on extra court days to move things along as the trial — initially projected to span three to four months — is about to enter its third month.
When soliciting concerns from jurors about the schedule additions, one alternate juror said he’d try to accommodate them if he were the only one experiencing difficulties, but noted it is “getting hard on my work schedule.” The judge informed jurors that he’d like to further stretch the number of hours they spend in the courtroom where possible to keep things progressing ahead of the holidays. With the pool of jurors down to 14 from 17 at the start of the trial, experts say the longer the trial takes, the more life issues may creep in.

…and neither can reporters covering the trial

The Holmes trial has been hit by another ongoing drama: tension over loud typing.

Judge Davila has repeatedly expressed frustration on behalf of one or more members of the jury about loud keyboard strokes emanating from the small but mighty group of reporters who show up with their laptops day in and day out. The typing apparently grows more noticeable when reporters are documenting the same juicy bits in tandem. On Tuesday, the judge once again warned reporters that only “silent keyboards” are allowed in the room, and said that if he received another complaint, he’d have to send “anyone who wants to type” to the overflow room. He specified that it is not fair to the government or to Holmes if the jury cannot concentrate.

It marks yet another hurdle for reporters covering the high-profile trial, given that cameras and recording devices are not permitted. And while the judge asked reporters to police themselves, a US marshal stood in the corner of the courtroom at various points throughout the day to scope out the noisy typer, or typers.

Many Twitter users scoffed at the social media company’s rebrand — revealed by founder Mark Zuckerberg earlier this week — using the hashtag #FacebookDead. “Somebody did not do their #branding research,” one post read. Dr. Nirit Weiss-Blatt, author of The Techlash and Tech Crisis Communication, tweeted: “In Hebrew, *Meta* means *Dead* The Jewish community will ridicule this name for years to come.”

“Grave error?? Facebook’s new name Meta means dead in Hebrew. Hilarious. #FacebookDead” another user tweeted.

Zuckerberg’s efforts to revamp Facebook come as the company faces what could be its most damaging scandal since it launched in 2004. The social media giant is under the spotlight following the publication this week of “The Facebook Papers,” a series of internal documents obtained by 17 news organizations, including CNN, that underpin whistleblower Frances Haugen’s claims that the company is riddled with institutional shortcomings. The documents reveal how Facebook has propelled misinformation, struggled to eliminate human trafficking-related content from the site, and tried to increase its teenage audience, despite internal research suggesting that its platforms, especially Instagram, can have an adverse effect on young users’ mental health.

Facebook isn’t the first company to be ridiculed after its branding didn’t translate abroad. In 2019, Kim Kardashian West was accused of cultural appropriation after debuting her shapewear brand, which she initially named Kimono. Kardashian even appeared to have trademarked the word “kimono,” a decision that the mayor of Kyoto, Daisaku Kadokawa, criticized in an open letter on Facebook. “We think that the names for ‘Kimono’ are the asset shared with all humanity who love Kimono and its culture therefore they should not be monopolized,” Kadokawa wrote. Kardashian changed the name of her brand to Skims later that year.

In 2017, McDonald’s name change in China raised eyebrows.
Customers were left confused when the company swapped Maidanglao, a Chinese iteration of the English name, for Jingongmen, which loosely translates to “Golden Arches.” One customer said it “sounds like a furniture store.” And when the Nissan Moco was launched in the early 2000s, Spanish-speaking customers may have looked twice, as the word “moco” translates to “bogey.” Needless to say, the name was only used in Japan.

These documents are disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Facebook whistleblower Frances Haugen’s legal counsel. The redacted versions were obtained by the consortium.

The documents, and the stories based on them, raised concerns about the potential real-world harms of Facebook’s various platforms. They also offer insight into the inner workings of the company, including its approach to misinformation and hate speech moderation, both in the US and internationally, as well as employee reactions to concerns about company decisions. The Wall Street Journal previously published a series of stories based on internal Facebook documents leaked by Haugen, which raised concerns about the impact of Instagram on teen girls, among other issues. (The consortium’s work is based on many of the same documents.)

Facebook has repeatedly tried to discredit Haugen and said reports on the documents mischaracterize its actions. “At the heart of these stories is a premise which is false,” a Facebook spokesperson previously said in a statement to CNN. “Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or wellbeing misunderstands where our own commercial interests lie.”

Here are some key quotes from the documents provided so far.

Vaccine misinformation

In February 2021, a year into the pandemic, a Facebook research report was shared internally and noted a concern: “Our internal systems are not yet identifying, demoting and/or removing anti-vaccine comments often enough.” Another report a month later stated: “The aggregate risk from [vaccine hesitancy] in comments may be higher than that from posts, and yet we have under-invested in preventing hesitancy in comments compared to our investment in content.” A Facebook spokesperson said the company has made improvements on the issues raised in the internal memos.
Human trafficking

According to one internal report from January 2020 entitled “Domestic Servitude and Labor Trafficking in the Middle East,” a Facebook investigation found the following: “Our platform enables all three stages of the human exploitation lifecycle (recruitment, facilitation, exploitation) via complex real-world networks. … The traffickers, recruiters and facilitators from these ‘agencies’ used FB [Facebook] profiles, IG [Instagram] profiles, Pages, Messenger and WhatsApp.” A Facebook spokesperson told CNN: “We prohibit human exploitation in no uncertain terms.” The spokesperson said the company has “been combatting human trafficking on our platform for many years.”

The algorithm’s impact

An internal research note from April 2019 pointed out that multiple European political parties claimed Facebook’s 2018 decision to refocus its News Feed algorithm on a new metric, referred to as “meaningful social interactions,” had “changed the nature of politics. For the worse.” Facebook told CNN the introduction of the metric wasn’t a “sea change” in how the company ranked users’ activity on the social network, as it previously considered likes, comments, and shares as part of its ranking.

Gaps in international coverage

In June 2020, a Facebook employee posted a report to an internal group with about 1,500 members noting an ongoing audit into the effectiveness of its signals to detect misinformation and hate speech in at-risk countries. According to the report, the audit “found significant gaps in our coverage (especially in Myanmar & Ethiopia), showcasing that our current signals may be inadequate.” In a public statement addressing reports concerning the leaked research, Facebook said, “We have an industry-leading process for reviewing and prioritizing countries with the highest risk of offline harm and violence, every six months.
When we respond to a crisis, we deploy country-specific support as needed.”

Internal reactions to the January 6 insurrection

Commenting on a post about the Capitol insurrection written by Mike Schroepfer, Facebook’s chief technology officer, one staffer wrote: “Leadership overrides research based policy decisions to better serve people like the groups inciting violence today. Rank and file workers have done their part to identify changes to improve our platforms but have been actively held back.” Another staffer, referencing years of controversial and questionable decision-making by Facebook leadership around political speech, concluded: “history will not judge us kindly.”

In response to the documents, Facebook told CNN, “the responsibility for the violence that occurred on January 6 lies with those who attacked our Capitol and those who encouraged them.” The company also stressed the steps it took before and after the insurrection to crack down on content related to the “Stop the Steal” movement.

Joshua Streit, 30, threatened to publicize alleged vulnerabilities in MLB’s IT systems unless the league paid him $150,000, according to charging documents. Streit is also charged with illegally streaming copyrighted live games from MLB, the National Basketball Association, the National Football League and the National Hockey League. Prosecutors alleged that Streit used stolen login credentials to access the leagues’ websites and stream live games on his own website for profit. “One of the victim sports leagues sustained losses of approximately $3 million due to STREIT’s conduct,” the US Attorney’s Office for the Southern District of New York said in a press release.

MLB is no stranger to hacking scandals. Christopher Correa, the former director of scouting for the St. Louis Cardinals, was sentenced to nearly four years in prison in 2016 for hacking into the Houston Astros’ scouting records.

The fact-finding mission, which was described by one of the researchers in an internal document seen by CNN, took place at an important moment for the country, and for Facebook’s operations within it. India’s national elections, the biggest in the world, were just months away — and Facebook was already bracing for potential trouble. The year prior, a spate of lynchings triggered by viral hoax messages on Facebook-owned WhatsApp had put the company at the center of a debate about misinformation in the country. In February 2019, with the election approaching, WhatsApp announced it was deploying artificial intelligence to clean up its platform. It also warned Indian political parties that their accounts could be blocked if they tried to abuse the platform while campaigning.

Against that backdrop, Facebook’s researchers interviewed more than two dozen users and found some underlying issues potentially complicating efforts to rein in misinformation in India. “Users were explicit about their motivations to support their political parties,” the researchers wrote in an internal research report seen by CNN. “They were also skeptical of experts as trusted sources. Experts were seen as vulnerable to suspicious goals and motivations.” One person interviewed by the researchers was quoted as saying: “As a supporter you believe whatever your side says.” Another interviewee, referencing India’s popular but controversial Prime Minister Narendra Modi, said: “If I get 50 Modi notifications, I’ll share them all.”

The document is part of disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Facebook whistleblower Frances Haugen’s legal counsel. A consortium of 17 US news organizations, including CNN, has reviewed the redacted versions received by Congress.
The conversations reveal some of the same societal issues present in the United States that are sometimes viewed both as products of algorithmic social media feeds and as complicating factors for improving them. These include nationalist parties, incendiary politicians, polarized communities and some distrust of experts. There have been widespread concerns globally that Facebook has deepened political divisions and that its efforts to fact-check information often make people double down on their beliefs, some of which were reflected in the research document. (Most of the Indian interviewees, however, also said they wanted Facebook “to help them identify misinfo on the platform.”)

Facebook also faced two fundamental problems in India that it did not have in the United States, where the company is based: understanding the many local languages and overcoming distrust of the company as an outsider. In India, English literacy is estimated to be around 10%, Facebook’s automated systems aren’t equipped to handle most of the country’s 22 officially recognized languages, and its teams often miss crucial local context — a fact highlighted in other internal documents and partly acknowledged by the misinformation researchers. “We faced serious language issues,” the researchers wrote, adding that the users they interviewed mostly had their Facebook profiles set to English, “despite acknowledging how much it hinders their understanding and influences their trust.”

Some Indian users interviewed by the researchers also said they didn’t trust Facebook to serve them accurate information about local matters. “Facebook was seen as a large international company who would be relatively slow to communicate the best information related to regional news,” the researchers wrote.
Facebook spokesperson Andy Stone told CNN Business that the study was “part of a broader effort” to understand how Indian users reacted to misinformation warning labels on content flagged by Facebook’s third-party fact checkers. “This work informed a change we made,” Stone said. “In October 2019 in the US and then expanded globally shortly thereafter, we began applying more prominent labels.” Stone said Facebook doesn’t break out content review data by country, but he said the company has over 15,000 people reviewing content worldwide, “including in 20 Indian languages.” The company currently partners with 10 independent fact-checking organizations in India, he added.

Warnings about hate speech and misinformation in Facebook’s biggest market

India is a crucial market for Facebook. With more than 400 million users across the company’s various platforms, the country is Facebook’s largest single audience. India has more than 800 million internet users and roughly half a billion people yet to come online, making it a centerpiece of Facebook’s push for global growth. Facebook’s expansion in the country includes a $5.7 billion investment last year to partner with a digital technology company owned by India’s richest man.

But the country’s sheer size and diversity, along with an uptick in anti-Muslim sentiment under Modi’s right-wing Hindu nationalist government, have magnified Facebook’s struggles to keep people safe and served as a prime example of its missteps in more volatile developing countries. The documents obtained by CNN and other news outlets, known as The Facebook Papers, show the company’s researchers and other employees repeatedly flagging issues with misinformation and hate speech in India.

For example, Facebook researchers released an internal report earlier this year on the Indian state of Assam, produced in partnership with local researchers from the organization Global Voices ahead of state elections in April.
It flagged concerns with “ethnic, religious and linguistic fear-mongering” directed toward “targets perceived as ‘Bengali immigrants’” crossing over the border from neighboring Bangladesh. The local researchers found posts on Facebook against Bengali speakers in Assam with “many racist comments, including some calling for Hindu Bengalis to be sent ‘back’ to Bangladesh or killed.” “Bengali-speaking Muslims face the worst of it in Assam,” the local researchers said.

Facebook researchers reported further anti-Muslim hate speech and misinformation across India. Other documents noted “a number of dehumanizing posts” that compared Muslims to “pigs” and “dogs,” and false claims that the “Quran calls for men to rape their female family members.” The company faced issues with language on those posts as well, with researchers noting that “our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned.” Some of the documents were previously reported by the Wall Street Journal and other news outlets.

“An Indian Test User’s Descent Into a Sea of Polarizing, Nationalistic Messages”

Facebook’s efforts around the 2019 election appeared to largely pay off. In a May 2019 note, Facebook researchers hailed the “40 teams and close to 300 people” who ensured a “surprisingly quiet, uneventful election period.” Facebook implemented two “break glass measures” to stop misinformation and took down over 65,000 pieces of content for violating the platform’s voter suppression policies, according to the note. But researchers also noted some gaps, including on Instagram, which didn’t have a misinformation reporting category at the time and was not supported by Facebook’s fact-checking tool.

Moreover, the underlying potential for Facebook’s platforms to cause real-world division and harm in India predated the election and continued long after — as did internal concerns about it.
One February 2019 research note, titled “An Indian Test User’s Descent Into a Sea of Polarizing, Nationalistic Messages,” detailed a test account set up by Facebook researchers that followed the company’s recommended pages and groups. Within three weeks, the account’s feed became filled with “a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.” Many of the groups had benign names, but researchers said they began sharing harmful content and misinformation, particularly against citizens of India’s neighbor and rival Pakistan, after a February 14 terror attack in Kashmir, the region disputed between the two countries. “I’ve seen more images of dead people in the past 3 weeks than I’ve seen in my entire life total,” one of the researchers wrote.

Facebook’s approach to hate speech in India has been controversial even among its own employees in the country. In August 2020, a Journal report alleged Facebook had failed to take action on hate speech posts by a member of India’s ruling party, leading to demands for change among many of its employees. (The company told the Journal at the time that its leaders are “against anti-Muslim hate and bigotry and welcome the opportunity to continue the conversation on these issues.”) In an internal comment thread days after the initial report, several of the company’s workers questioned, in part, its inaction on politicians sharing misinformation and hate speech. “As there are a limited number of politicians, I find it inconceivable that we don’t have even basic key word detection set up to catch this sort of thing,” one employee commented. “After all cannot be proud as a company if we continue to let such barbarism flourish on our network.”

Both companies reported revenue results on Thursday that fell short of Wall Street analysts’ expectations and warned that supply chain issues could weigh on business in the December quarter.

Amazon missed Wall Street projections for both sales and profit for the three months ended September 30 — a rare miss for the internet giant. It posted net sales of $110.8 billion, up 15% from the same period a year earlier but below analyst projections of $111.6 billion. Net income for the quarter decreased from the prior year to $3.2 billion, well short of the $4.6 billion analysts expected. Amazon CEO Andy Jassy warned in a statement that, in the upcoming fourth quarter, the company’s consumer business expects to incur several billion dollars of additional costs. Those costs, he said, come “as we manage through labor supply shortages, increased wage costs, global supply chain issues, and increased freight and shipping costs — all while doing whatever it takes to minimize the impact on customers and selling partners this holiday season.”

Apple posted quarterly sales of $83.4 billion, slightly lower than analysts had anticipated. iPhone sales were lower than analyst forecasts, too, coming in at $38.9 billion. In a conference call with analysts after reporting the results, CEO Tim Cook focused on the fact that Apple managed to post a quarterly sales record despite the supply constraints. “Demand was very robust,” he said, but he also noted that “larger-than-expected supply constraints,” including silicon shortages and a “related manufacturing disruption,” had a $6 billion negative impact on the business.

Amazon’s (AMZN) stock fell as much as 5% and Apple (AAPL) shares fell more than 4% in after-hours trading Thursday. Supply chain disruptions and staffing issues caused by the pandemic have escalated in recent months, hitting a wide range of industries.
Several retailers, manufacturers and economists have warned that global supply chain constraints will not only lead to fewer discounts during the holidays this year but also result in a potential dearth of products on store shelves.

Apple has built up a sophisticated supply chain over the years for its various hardware products, and Amazon has developed an advanced logistics operation for deliveries. The supply concerns are also dragging into important periods for both companies: for Amazon, the all-important holiday shopping season, and for Apple, the launch of several new products, including its iPhone 13 lineup.

Amazon previously warned that the second half of 2021 could bring slower growth compared to last year because more people were returning to in-person shopping versus online ordering as vaccines rolled out. And things don’t appear to be looking up just yet: Amazon is now projecting much slower-than-usual growth for the final three months of the year.

Apple declined to provide revenue guidance for the December quarter, “given the continued uncertainty around the world in the near term,” CFO Luca Maestri said during the company’s earnings call Thursday. “We estimate the impact from supply constraints will be larger during the December quarter. Despite this challenge, we are seeing high demand for our products,” he said.

If the board approves, Kotick will be paid $62,500, he said — a sharp drop from the $155 million pay package approved by shareholders in June. “I am asking not to receive any bonuses or be granted any equity during this time,” he added.

The announcement was part of a broader set of changes Kotick — who has been Activision CEO since 1991, including through the 2008 merger with Blizzard — said the company is making. The changes include an end to forced arbitration of sexual harassment and discrimination claims, a 50% increase in the percentage of women and non-binary people at the company, and a “zero tolerance harassment policy.”

Kotick’s letter comes after months of turmoil within Activision Blizzard, which owns hugely popular titles such as “Call of Duty,” “World of Warcraft” and “Candy Crush,” over allegations of sexual harassment and gender pay disparity, among other issues. A lawsuit filed in July by California’s Department of Fair Employment and Housing alleged a “frat boy” work culture in which women were subjected to constant discrimination and harassment. (The company told CNN at the time that it had addressed past misconduct and criticized the lawsuit as “inaccurate” and “distorted.”)

The lawsuit and the company’s initial response kicked off a storm of dissent from Activision Blizzard’s workforce that ultimately led to hundreds of employees staging a walkout at the company’s offices in Irvine, California. Kotick subsequently acknowledged that the company’s initial response was “tone deaf.”

The company is also facing a complaint from the National Labor Relations Board filed earlier this month accusing it of unfair labor practices, as well as an investigation by the Securities and Exchange Commission with which the company has said it is cooperating.
Those actions are all still pending, and Activision Blizzard said it “continues to productively engage with regulators.”

A month ago, the company paid $18 million to settle a separate lawsuit by the Equal Employment Opportunity Commission (EEOC) that accused it of subjecting female employees to sexual harassment, retaliating against them for complaining about harassment, and paying female employees less than male employees. The company also “discriminated against employees due to their pregnancy,” the EEOC alleged. In a statement accompanying the EEOC settlement announcement, Kotick said he remained “unwavering in my commitment to make Activision Blizzard one of the world’s most inclusive, respected, and respectful workplaces.”

“I know that some people will say that this isn’t a time to focus on the future, and I want to acknowledge that there are important issues to work on in the present. There always will be,” Zuckerberg said in a kickoff video ahead of his keynote. “So for many people, I’m just not sure there ever will be a good time to focus on the future. But I also know that there are a lot of you who feel the same way that I do.” “We live for what we’re building,” Zuckerberg added. “And while we make mistakes, we keep learning and building and moving forward.”

Facebook livestreamed its eighth annual Connect event to unveil its latest innovations and research in augmented and virtual reality, and more of its vision for the metaverse, which refers to efforts to combine virtual and augmented reality technologies in a new online platform. Facebook showed a series of concept videos that highlighted that vision, such as sending a holographic image of yourself to a concert with a friend attending in real life, sitting around virtual meeting tables with remote colleagues, or playing immersive games with friends. Zuckerberg also announced that Messenger calling is coming to VR, plans to operate a virtual marketplace where developers can sell virtual goods, and a new home screen in Oculus Quest to make chatting and games in the virtual world more social.

“Your devices won’t be the focal point of your attention anymore,” he said. “We’re starting to see a lot of these technologies coming together in the next five or 10 years. A lot of this is going to be mainstream and a lot of us will be creating and inhabiting worlds that are just as detailed and convincing as this one, on a daily basis.”

The company was also expected to reorganize its business under a new name and possibly change the title of its cofounder, Mark Zuckerberg, who kicked off the event with his keynote.
The change could demote Facebook to just one of the company’s three major platforms — which also include Instagram and WhatsApp — rather than the overarching brand. The move would reflect the company’s mission to be known for more than social media and align with its growing focus on the metaverse. The Verge first reported the potential name change last week.

A rebranding could also be part of an effort to overhaul Facebook’s reputation and turn the page following a series of PR nightmares, including misinformation on its platforms, content moderation failures and revelations about the negative effects its products have on some users’ mental health.

On an earnings call in July, Zuckerberg teased the concept of the metaverse as a space similar to the internet where users (via digital avatars) can walk around and interact with one another in real time. “In the coming years, I expect people will transition from seeing us primarily as a social media company to seeing us as a metaverse company,” he said on the call.

In a blog post published Wednesday, the company said it is rolling out a tool that lets parents and kids under the age of 18 request that photos be removed from its images tab or no longer appear as thumbnails in a search query. Although Google (GOOG) previously offered ways for people to request the removal of personal information and photos that fit into categories such as “non-consensual explicit” or “financial, medical and national ID,” it’s now extending this to images of minors.

“We know that kids and teens have to navigate some unique challenges online, especially when a picture of them is unexpectedly available on the internet,” the company said in the blog post. “We believe this change will help give young people more control over their digital footprint and where their images can be found on Search.”

The new form allows users to flag URLs of any images or search results that contain pictures they want removed. Google said its teams will review each submission and reach out if they need additional information to verify the requirements for removal. However, the company emphasized this won’t remove the image from the internet entirely; people will need to contact a website’s webmaster to ask for that content to be removed.

The company previously announced the tool in August as part of a bigger effort to protect minors across its platforms. Other features it introduced at the time included a private default setting for all videos uploaded by a teenager and a tool called Family Link that helps parents monitor their kids’ accounts.

The efforts come as Big Tech companies continue to offer more child safety measures amid criticism from experts and lawmakers about how various platforms impact young users.
Earlier this week, an executive from Google-owned YouTube — alongside leaders from Snap and TikTok — was grilled by senators about the steps the platforms are taking to protect their young users.

Some experts applauded Google’s latest move to give minors more control over images, noting their removal could also cut down on cyberbullying or prevent potentially harmful information or photos from persisting online.

“We’re glad to see Google take this overdue step to give children and teens and their families more control over what images show up in search results,” said David Monahan, campaign manager at Fairplay, a child advocacy group. “We hope Google will go farther to reverse its collection of sensitive data and give families the ability to erase the digital footprint that Google and its partners maintain on every young person in the US.”

Alexandra Hamlet, a clinical psychologist who works with teenagers, said Google’s request process could also help parents talk more openly with their kids about managing their online presence. That could include discussing what’s worthy of consideration for removal, such as a photo that could harm their future reputation versus one where they perceive themselves to look less than “perfect.”

“While some parents may believe that their teen can handle the removal of various pictures without help, I do suggest that they still have conversations about values and how they tie into image online,” she said. “They could be missing out on a great opportunity to help their teen to build insight and assertiveness skills.”