
Candela medical
Quiñonero was a natural pick for the job. He, as much as anybody, was the one responsible for Facebook's position as an AI powerhouse. In his six years at Facebook, he'd created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he'd diffused those algorithms across the company. Now his mandate would be to make them less harmful.

Facebook has consistently pointed to the efforts by Quiñonero and others as it seeks to repair its reputation. It regularly trots out various leaders to speak to the media about the ongoing reforms. In May of 2019, it granted a series of interviews with Schroepfer to the New York Times, which rewarded the company with a humanizing profile of a sensitive, well-intentioned executive striving to overcome the technical challenges of filtering out misinformation and hate speech from a stream of content that amounted to billions of pieces a day. These challenges are so hard that it makes Schroepfer emotional, wrote the Times: "Sometimes that brings him to tears."

In the spring of 2020, it was apparently my turn. Ari Entin, Facebook's AI communications director, asked in an email if I wanted to take a deeper look at the company's AI work. After talking to several of its AI leaders, I decided to focus on Quiñonero. As not only the leader of the Responsible AI team but also the man who had made Facebook into an AI-driven company, Quiñonero was a solid choice to use as a poster boy.

He seemed a natural choice of subject to me, too. In the years since he'd formed his team following the Cambridge Analytica scandal, concerns about the spread of lies and hate speech on Facebook had only grown. In late 2018 the company admitted that this activity had helped fuel a genocidal anti-Muslim campaign in Myanmar for several years. In 2020 Facebook started belatedly taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon. All these dangerous falsehoods were metastasizing thanks to the AI capabilities Quiñonero had helped build. The algorithms that underpin Facebook's business weren't created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by. Fixing this problem, to me, seemed like core Responsible AI territory.

But Entin and Quiñonero had a different agenda. Each time I tried to bring up these topics, my requests to speak about them were dropped or redirected. They only wanted to discuss the Responsible AI team's plan to tackle one specific kind of problem: AI bias, in which algorithms discriminate against particular user groups. An example would be an ad-targeting algorithm that shows certain job or housing opportunities to white people but not to minorities.

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

Everything the company does and chooses not to do flows from a single motivation: Zuckerberg's relentless desire for growth.








