Tech ethics can mean a lot of different things, but surely one of the most critical, unavoidable, and yet somehow still controversial propositions in the emerging field of ethics in technology is that tech should promote gender equality. But does it? And to the extent it does not, what (and who) needs to change?

In this second of a two-part interview “On The Internet of Women,” Harvard fellow and Logic magazine founder and editor Moira Weigel and I discuss the future of capitalism and its relationship to sex and tech; the place of ambivalence in feminist ethics; and Moira’s personal experiences with #MeToo.

Greg E.: There’s a relationship between technology and feminism, and technology and sexism for that matter. Then there’s a relationship between all of those things and capitalism. One of the underlying themes in your essay “The Internet of Women,” that I thought made it such a kind of, I’d call it a seminal essay, but that would be a silly term to use in this case…

Moira W.: I’ll take it.

Greg E.: One of the reasons I thought your essay should be required reading, basic reading, in tech ethics is that you argue we need to examine the degree to which sexism is a part of capitalism.

Moira W.: Yes.

Greg E.: Talk about that.

Moira W.: This is a big topic! Where to begin?

Capitalism, the social and economic system that emerged in Europe around the sixteenth century and that we still live under, has a profound relationship to histories of sexism and racism. It’s really important to recognize that sexism and racism themselves are historical phenomena.

They don’t exist in the same way in all places. They take on different forms at different times. I find that very hopeful to recognize, because it means they can change.

It’s really important not to get too pulled into the view that men have always hated women, that there will always be this war of the sexes that, best case scenario, gets temporarily resolved in the depressing truce of conventional heterosexuality. The conditions we live under are not the only possible conditions—they are not inevitable.

A fundamental Marxist insight is that capitalism necessarily involves exploitation. In order to grow, a company needs to pay people less for their work than that work is worth. Race and gender help make this process of exploitation seem natural.

Certain people are naturally inclined to do certain kinds of lower status and lower waged work, and why should anyone be paid much to do what comes naturally? And it just so happens that the kinds of work we value less are seen as more naturally “female.” This isn’t just about caring professions that have been coded female—nursing and teaching and so on, although it does include those.

In fact, the history of computer programming provides one of the best examples. In the early decades, when writing software was seen as rote work and lower status, it was mostly done by women. As Mar Hicks and other historians have shown, as the profession became more prestigious and more lucrative, women were very actively pushed out.

You even see this with specific coding languages. As more women learn, say, JavaScript, it becomes seen as feminized—less impressive or valuable than Python, a “softer” skill. This perception, that women have certain natural capacities that should be free or cheap, has a long history that overlaps with the history of capitalism. At some level, it is a byproduct of the rise of wage labor.

To a medieval farmer it would have made no sense to say that when his wife gave birth to the children who worked their farm, killed the chickens and cooked them, or did work around the house, that wasn’t “work,” [but when he] took the chickens to the market to sell them, that was. Right?

A long line of feminist thinkers has drawn attention to this in different ways. One slogan from the 70s was, ‘whose work produces the worker?’ Women do that work, but neither companies nor the state, which profit from this process, expect to pay for it.

Why am I saying all this? My point is: historically, race and gender have been very useful for getting things for capitalism for free—and for justifying that process. Of course, they’re also very useful for dividing exploited people against one another, so that a white male worker hates his black coworker, or his leeching wife, rather than his boss.

Greg E.: I want to ask more about this topic and technology; you are a publisher of Logic magazine, which is one of the most interesting publications about technology to come on the scene in the last few years.


Facebook’s former teen-in-residence Michael Sayman, now at Google, is back today with the launch of a new game: Emojishot, an emoji-based guessing game for iOS, built over the past ten weeks within Google’s in-house incubator, Area 120.

The game, which is basically a version of charades using emoji characters, is notable because of its creator.

By age 17, Sayman had launched five apps and had become Facebook’s youngest-ever employee. Best known for his hit game 4 Snaps, the developer caught Mark Zuckerberg’s eye, earning him a demo spot on stage at Facebook’s F8 conference. While at Facebook, Sayman built Facebook’s teen app Lifestage — a Snapchat-like standalone project which allowed the company to explore new concepts around social networking aimed at a younger demographic.

Lifestage was shut down two years ago, and Sayman defected to Google shortly afterward. At Google, he was rumored to be heading up an internal social gaming effort called Arcade where gamers played using accounts tied to their phone numbers — not a social network account.

At the time, HQ Trivia was still a hot title, not a novelty from a struggling startup — and the new gaming effort looked like Google’s response. However, Arcade has always been only an Area 120 project, we understand.

To be clear, that means it’s not an official Google effort — as an Area 120 project, it’s not associated with any of Google’s broader efforts in gaming, social or anything else. Area 120 apps and services are instead built by small teams who are personally interested in pursuing an idea. In the case of Emojishot, it was Sayman’s own passion project.

Emojishot itself is meant to be played with friends, who take turns using emoji to create a picture that the others then have to guess. For example, the game’s screenshots show the word “kraken” drawn using octopus, boat and arrow emojis. The emojis are selected from a keyboard below and can be resized to compose the picture. The resulting picture is called the “emojishot,” and can also be saved to your Camera Roll.

Players can pick from a variety of words that unlock and get increasingly difficult as you successfully progress through the game. The puzzles can also be shared with friends to get help with solving, and there’s a “nudge” feature to encourage a friend to return to the game and play.

According to the game’s website, the idea was to make a fun game that explored emojis as art and a form of communication.

Unfortunately, we were unable to test it just yet, as the service wasn’t up-and-running at the time of publication. (The game is just now rolling out so it may not be fully functional until later today).

While there are other “Emoji Charades” games on the App Store, the current leading title is aimed at party play on the living room TV, not at playing with friends on phones.

Sayman officially announced Emojishot today, noting his efforts at Area 120 and how the game came about.

“For the last year, I’ve been working in Area 120, Google’s workshop for experimental products. I’ve been exploring and rapidly prototyping a bunch of ideas, testing both internally and externally,” he says. “Ten weeks ago, we came up with the idea for an emoji-based guessing game. After a lot of testing and riffing on the idea, we’re excited that the first iteration — Emojishot — is now live on the iOS App Store…We’ve had a lot of fun with it and are excited to open it up to a wider audience,” Sayman added.

He notes that more improvements to the game will come over time, and offered to play with newcomers via his username “michael.”

The app is available to download from the U.S. iOS App Store here. An Android waitlist is here.


One of the bigger developments in customer service has been the impact of social media — both as a place to vent frustration or praise (mostly frustration), and — especially over messaging apps — as a place for businesses to connect with their users.

Now, customer support specialist Zendesk has made an acquisition so that it can make a bigger move into how it works within social media platforms, and specifically messaging apps: it has acquired Smooch, a startup that describes itself as an “omnichannel messaging platform,” which companies’ customer care teams can use to interact with people over messaging platforms like WhatsApp, WeChat, Line and Messenger, as well as SMS and email.

Smooch was in fact one of the first partners for the WhatsApp Business API, alongside VoiceSage, Nexmo, Infobip, Twilio, MessageBird and others that are already advertising their services in this area.

It had also been a longtime partner of Zendesk’s, powering the company’s own WhatsApp Business integration and other features. The two already have some customers in common, including Uber. Other Smooch customers include Four Seasons, SXSW, Betterment, Clarabridge, Harry’s, LVMH, Delivery Hero and BarkBox.

Terms of the deal are not being disclosed, but Zendesk SVP Shawna Wolverton said in an interview that the startup’s entire team of 48, led by co-founder and CEO Warren Levitan, is being offered positions with Zendesk. Smooch is based out of Montreal, Canada — so this represents an expansion for Zendesk into building an office in Canada.

Its backers included iNovia, TA Associates and Real Ventures, which collectively had backed it with less than $10 million (when you leave the inflated hills surrounding Silicon Valley, numbers magically decline). As Zendesk is publicly traded, we may get more of a picture of the price in future quarterly reports. This is the company’s fifth acquisition to date.

The deal underscores the big impact that messaging apps are making in customer service. While phone and internet are massive points of contact, support for messaging apps is one of the most-requested features among Zendesk’s customers, “because they want to be where their customers are,” with WhatsApp — now at 1.5 billion users — currently at the top of the pile, Wolverton said. (More than half of Zendesk’s revenues are from outside the US, which speaks to why WhatsApp — which is bigger outside the US than it is in it — is a popular request.)

That’s partly a by-product of how popular messaging apps are full-stop, with more than 75 percent of all smartphone users having at least one messaging app in use on their devices.

“We live in a messaging-centric world, and customers expect the convenience and interactivity of messaging to be part of their experiences,” said Mikkel Svane, Zendesk founder, CEO and chairman, in a statement. “As long-time partners with Smooch, we know first hand how much they have advanced the conversational experience to bring together all forms of messaging and create a continuous conversation between customers and businesses.”

While the two companies were already working together, the acquisition will mean a closer integration.

That will be in multiple areas. Last year, Zendesk launched a new CRM play called Sunshine, going head to head with the likes of Salesforce in helping businesses better organise and make use of customer data. Smooch will build on that strategy to bring in data to Sunshine from messaging apps and the interactions that take place on them. Also last year, Zendesk launched an omnichannel play, a platform called The Suite, which it says “has become one of our most successful products ever,” with a 400 percent rise in its customers taking an omnichannel approach. Smooch already forms a key part of that, and it will become even more tightly integrated.

On the outbound side, for now, there will be two areas where Smooch will be used, Wolverton said. First will be on the basic level of giving Zendesk users the ability to see and create messaging app discussions within a dashboard where they are able to monitor and handle all customer relationship contacts: a conversation that was initiated on, say, Twitter can be easily moved into WhatsApp or whatever more direct channel someone wants to use.

Second, Wolverton said that customer care workers can use Smooch to send on “micro apps” to users to handle routine service enquiries, for example sending them links to make or change seat assignments on a flight.

Over time, the plan is to bring more automated options into the experience, which opens the door to using more AI and potentially bots down the line.


Indonesia is the latest nation to bring the hammer down on social media after the government restricted the use of WhatsApp and Instagram following deadly riots yesterday.

Numerous Indonesia-based users are today reporting difficulties sending multimedia messages via WhatsApp, which is one of the country’s most popular chat apps, and posting content to Facebook, while the hashtag #instagramdown is trending among the country’s Twitter users due to problems accessing the Facebook-owned photo app.

Wiranto, a coordinating minister for political, legal and security affairs, confirmed in a press conference that the government is limiting access to social media and “deactivating certain features” to maintain calm, according to a report from Coconuts.

Rudiantara, the communications minister of Indonesia and a critic of Facebook, explained that users “will experience lag on Whatsapp if you upload videos and photos.”

Facebook — which operates both WhatsApp and Instagram — didn’t explicitly confirm the blockages, but it did say it has been in communication with the Indonesian government.

“We are aware of the ongoing security situation in Jakarta and have been responsive to the Government of Indonesia. We are committed to maintaining all of our services for people who rely on them to communicate with their loved ones and access vital information,” a spokesperson told TechCrunch.

A number of Indonesia-based WhatsApp users confirmed to TechCrunch that they are unable to send photos, videos and voice messages through the service. Those restrictions are lifted when using Wi-Fi or mobile data services through a VPN, the people confirmed.

The restrictions come as Indonesia grapples with political tension following the release of the results of its presidential election on Tuesday. Defeated candidate Prabowo Subianto said he will challenge the result in the constitutional court.

Riots broke out in the capital, Jakarta, last night, killing at least six people and leaving more than 200 injured. Following this, it is alleged that misleading information and hoaxes about the nature of the riots and the people who participated in them began to spread on social media services, according to local media reports.

Protesters hurl rocks during clash with police in Jakarta on May 22, 2019. – Indonesian police said on May 22 they were probing reports that at least one demonstrator was killed in clashes that broke out in the capital Jakarta overnight after a rally opposed to President Joko Widodo’s re-election. (Photo by ADEK BERRY / AFP)

For Facebook, seeing its services forcefully cut off in a region is no longer a rare incident. The company, which is grappling with the spread of false information in many markets, faced a similar restriction in Sri Lanka in April, when the service was completely banned for days amid terrorist strikes in the nation. India, which just this week concluded its general election, has expressed concerns over Facebook’s inability to contain the spread of false information on WhatsApp, the country’s largest chat app with over 200 million monthly users.

Indonesia’s Rudiantara expressed a similar concern earlier this month.

“Facebook can tell you, ‘We are in compliance with the government’. I can tell you how much content we requested to be taken down and how much of it they took down. Facebook is the worst,” he told a House of Representatives Commission last week, according to the Jakarta Post.

Update 05/22 02:30 PDT: The original version of this post has been updated to reflect that usage of Facebook in Indonesia has also been impacted.


A multi-month hunt for political disinformation spreading on Facebook in Europe suggests there are concerted efforts to use the platform to spread bogus far right propaganda to millions of voters ahead of a key EU vote which kicks off tomorrow.

Following the independent investigation, Facebook has taken down a total of 77 pages and 230 accounts from Germany, UK, France, Italy, Spain and Poland — which had been followed by an estimated 32 million people and generated 67 million ‘interactions’ (i.e. comments, likes, shares) in the last three months alone.

The bogus, mainly far-right disinformation networks were not identified by Facebook — but had been reported to it by campaign group Avaaz — which says the fake pages had more Facebook followers and interactions than all the main EU far right and anti-EU parties combined.

“The results are overwhelming: the disinformation networks upon which Facebook acted had more interactions (13 million) in the past three months than the main party pages of the League, AfD, VOX, Brexit Party, Rassemblement National and PiS combined (9 million),” it writes in a new report.

“Although interactions is the figure that best illustrates the impact and reach of these networks, comparing the number of followers of the networks taken down reveals an even clearer image. The Facebook networks takedown had almost three times (5.9 million) the number of followers as AfD, VOX, Brexit Party, Rassemblement National and PiS’s main Facebook pages combined (2 million).”

Avaaz has previously found and announced far right disinformation networks operating in Spain, Italy and Poland — and a spokesman confirmed to us it’s re-reporting some of its findings now (such as the ~30 pages and groups in Spain that had racked up 1.7M followers and 7.4M interactions, which we covered last month) to highlight an overall total for the investigation.

“Our report contains new information for France, United Kingdom and Germany,” the spokesman added.

Examples of politically charged disinformation being spread via Facebook by the bogus networks it found include a fake viral video, seen by 10 million people, that supposedly shows migrants in Italy destroying a police car (it was actually from a movie, and Avaaz adds that the fake had been “debunked years ago”); a story in Poland claiming that migrant taxi drivers rape European women, including a fake image; and fake news about a child cancer center being closed down by Catalan separatists in Spain.

There’s lots more country-specific detail in its full report.

In all, Avaaz reported more than 500 suspicious pages and groups to Facebook related to the three-month investigation of Facebook disinformation networks in Europe. Though Facebook only took down a subset of the far right muck-spreaders — around 15% of the suspicious pages reported to it.

“The networks were either spreading disinformation or using tactics to amplify their mainly anti-immigration, anti-EU, or racist content, in a way that appears to breach Facebook’s own policies,” Avaaz writes of what it found.

It estimates that content posted by all the suspicious pages it reported had been viewed some 533 million times over the pre-election period. Albeit, there’s no way to know whether or not everything it judged suspicious actually was.

In a statement responding to Avaaz’s findings, Facebook told us:

We thank Avaaz for sharing their research for us to investigate. As we have said, we are focused on protecting the integrity of elections across the European Union and around the world. We have removed a number of fake and duplicate accounts that were violating our authenticity policies, as well as multiple Pages for name change and other violations. We also took action against some additional Pages that repeatedly posted misinformation. We will take further action if we find additional violations.

The company did not respond to our question asking why it failed to unearth this political disinformation itself.

Ahead of the EU parliament vote, which begins tomorrow, Facebook invited a select group of journalists to tour a new Dublin-based election security ‘war room’ — where it talked about a “five pillars of countering disinformation” strategy to prevent cynical attempts to manipulate voters’ views.

But as Avaaz’s investigation shows there’s plenty of political disinformation flying by entirely unchecked.

One major ongoing issue where political disinformation and Facebook’s platform are concerned is that how the company enforces its own rules remains entirely opaque.

We don’t get to see all the detail — so can’t judge and assess all its decisions. Yet Facebook has been known to shut down swathes of accounts deemed fake ahead of elections, while apparently failing entirely to find other fakes (such as in this case).

It’s a situation that does not look compatible with the continued functioning of democracy given Facebook’s massive reach and power to influence.

Nor is the company under an obligation to report every fake account it confirms. Instead, Facebook gets to control the timing and flow of any official announcements it chooses to make about “coordinated inauthentic behaviour” — dropping these self-selected disclosures as and when it sees fit, and making them sound as routine as possible by cloaking them in its standard, dryly worded newspeak.

Back in January, Facebook COO Sheryl Sandberg admitted publicly that the company is blocking more than 1M fake accounts every day. If Facebook was reporting every fake it finds it would therefore need to do so via a real-time dashboard — not sporadic newsroom blog posts that inherently play down the scale of what is clearly embedded into its platform, and may be so massive and ongoing that it’s not really possible to know where Facebook stops and ‘Fakebook’ starts.

The suspicious behaviours that Avaaz attached to the pages and groups it found that appeared to be in breach of Facebook’s stated rules include the use of fake accounts, spamming, misleading page name changes and suspected coordinated inauthentic behavior.

When Avaaz previously reported the Spanish far right networks, Facebook subsequently told us it had removed “a number” of pages violating its “authenticity policies”, including one page for name change violations, but claimed “we aren’t removing accounts or Pages for coordinated inauthentic behavior”.

So again, it’s worth emphasizing that Facebook gets to define what is and isn’t acceptable on its platform — including creating terms that seek to normalize its own inherently dysfunctional ‘rules’ and their ‘enforcement’.

Such as by creating terms like “coordinated inauthentic behavior”, which sets a threshold of Facebook’s own choosing for what it will and won’t judge political disinformation. It’s inherently self-serving.

Given that Facebook only acted on a small proportion of what Avaaz found and reported overall, we might posit that the company is setting a very high bar for acting against suspicious activity. And that plenty of election fiddling is free flowing under its feeble radar. (When we previously asked Facebook whether it was disputing Avaaz’s finding of coordinated inauthentic behaviour vis-a-vis the far right disinformation networks it reported in Spain the company did not respond to the question.)

Much of the publicity around Facebook’s self-styled “election security” efforts has also focused on how it’s enforcing new disclosure rules around political ads. But again political disinformation masquerading as organic content continues being spread across its platform — where it’s being shown to be racking up millions of interactions with people’s brains and eyeballs.

Plus, as we reported yesterday, research conducted by the Oxford Internet Institute into pre-EU election content sharing on Facebook has found that sources of disinformation-spreading ‘junk news’ generate far greater engagement on its platform than professional journalism.

So while Facebook’s platform is also clearly full of real people sharing actual news and views, the fake BS which, Avaaz’s findings imply, is also flooding the platform gets spread around more on a per-unit basis. And it’s democracy that suffers — because vote manipulators are able to pass off manipulative propaganda and hate speech as bona fide news and views as a consequence of Facebook publishing the fake stuff alongside genuine opinions and professional journalism.

Facebook does not have algorithms that can perfectly distinguish one from the other, and has suggested it never will.

The bottom line is that even if Facebook dedicates far more resource (human and AI) to rooting out ‘election interference’ the wider problem is that a commercial entity which benefits from engagement on an ad-funded platform is also the referee setting the rules.

Indeed, the whole loud Facebook publicity effort around “election security” looks like a cynical attempt to distract the rest of us from how broken its rules are. Or, in other words, a platform that accelerates propaganda is also seeking to manipulate and skew our views.


The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Millions of Instagram influencers had their private contact data scraped and exposed

A massive database containing contact information for millions of Instagram influencers, celebrities and brand accounts was found online by a security researcher.

We traced the database back to Mumbai-based social media marketing firm Chtrbox. Shortly after we reached out, Chtrbox pulled the database offline.

2. US mitigates Huawei ban by offering temporary reprieve

Last week, the Trump administration effectively banned Huawei from importing U.S. technology, a decision that forced several American companies, including Google, to take steps to sever their relationships. Now, the Department of Commerce has announced that Huawei will receive a “90-day temporary general license” to continue to use U.S. technology to which it already has a license.

3. GM’s car-sharing service Maven to exit eight cities

GM is scaling back its Maven car-sharing company and will stop service in nearly half of the 17 North American cities in which it operates.

4. Maisie Williams’ talent discovery startup Daisie raises $2.5M, hits 100K members

The actress who became famous playing Arya Stark on “Game of Thrones” has fresh funding for her startup.

5. ByteDance, TikTok’s parent company, plans to launch a free music streaming app

The company, which operates popular app TikTok, has held discussions with music labels to launch the app as soon as the end of this quarter.

6. Future Family launches a $200 membership for fertility coaching

In its recent user research, Future Family found that around 70% of new customers had yet to see a fertility doctor. So today, the startup is rolling out a new membership plan that offers customers a dedicated fertility coach, and helps them find a doctor in their area.

7. When will customers start buying all those AI chips?

Danny Crichton says it’s the best and worst time to be in semiconductors right now. (Extra Crunch membership required.)


A study carried out by academics at Oxford University to investigate how junk news is being shared on social media in Europe ahead of regional elections this month has found individual stories shared on Facebook’s platform can still hugely outperform the most important and professionally produced news stories, drawing as much as 4x the volume of Facebook shares, likes, and comments.

The study, conducted for the Oxford Internet Institute’s (OII) Computational Propaganda Project, is intended to respond to widespread concern about the spread of online political disinformation ahead of the EU elections, which take place later this month, by examining pre-election chatter on Facebook and Twitter in English, French, German, Italian, Polish, Spanish, and Swedish.

Junk news in this context refers to content produced by known sources of political misinformation — aka outlets that are systematically producing and spreading “ideologically extreme, misleading, and factually incorrect information” — with the researchers comparing interactions with junk stories from such outlets to news stories produced by the most popular professional news sources to get a snapshot of public engagement with sources of misinformation ahead of the EU vote.

As we reported last year, the Institute also launched a junk news aggregator ahead of the US midterms to help Internet users get a handle on manipulative politically-charged content that might be hitting their feeds.

In the EU the European Commission has responded to rising concern about the impact of online disinformation on democratic processes by stepping up pressure on platforms and the adtech industry — issuing monthly progress reports since January after the introduction of a voluntary code of practice last year intended to encourage action to squeeze the spread of manipulative fakes. Albeit, so far these ‘progress’ reports have mostly boiled down to calls for less foot-dragging and more action.

One tangible result last month was Twitter introducing a report option for misleading tweets related to voting ahead of the EU vote, though again you have to wonder what took it so long given that online election interference is hardly a new revelation. (The OII study is also just the latest piece of research to bolster the age old maxim that falsehoods fly and the truth comes limping after.)

The study also examined how junk news spread on Twitter during the pre-EU election period, with the researchers finding that less than 4% of sources circulating on Twitter’s platform were junk news (or “known Russian sources”) — with Twitter users sharing far more links to mainstream news outlets overall (34%) over the study period.

Although the Polish language sphere was an exception — with junk news making up a fifth (21%) of EU election-related Twitter traffic in that outlying case.

Returning to Facebook, while the researchers do note that many more users interact with mainstream content overall via its platform, noting that mainstream publishers have a higher following and so “wider access to drive activity around their content” and meaning their stories “tend to be seen, liked, and shared by far more users overall”, they also point out that junk news still packs a greater per story punch — likely owing to the use of tactics such as clickbait, emotive language, and outragemongering in headlines which continues to be shown to generate more clicks and engagement on social media.

It’s also of course much quicker and easier to make some shit up vs the slower pace of doing rigorous professional journalism — so junk news purveyors can get out ahead of news events also as an eyeball-grabbing strategy to further the spread of their cynical BS. (And indeed the researchers go on to say that most of the junk news sources being shared during the pre-election period “either sensationalized or spun political and social events covered by mainstream media sources to serve a political and ideological agenda”.)

“While junk news sites were less prolific publishers than professional news producers, their stories tend to be much more engaging,” they write in a data memo covering the study. “Indeed, in five out of the seven languages (English, French, German, Spanish, and Swedish), individual stories from popular junk news outlets received on average between 1.2 to 4 times as many likes, comments, and shares than stories from professional media sources.

“In the German sphere, for instance, interactions with mainstream stories averaged only 315 (the lowest across this sub-sample) while nearing 1,973 for equivalent junk news stories.”

To conduct the research the academics gathered more than 584,000 tweets related to the European parliamentary elections from more than 187,000 unique users between April 5 and April 20 using election-related hashtags — from which they extracted more than 137,000 tweets containing a URL link, which pointed to a total of 5,774 unique media sources.

Sources that were shared 5x or more across the collection period were manually classified by a team of nine multi-lingual coders based on what they describe as “a rigorous grounded typology developed and refined through the project’s previous studies of eight elections in several countries around the world”.

Each media source was coded individually by two separate coders, a technique they say was able to successfully label nearly 91% of all links shared during the study period.

The five most popular junk news sources were extracted from each language sphere looked at — with the researchers then measuring the volume of Facebook interactions with these outlets between April 5 and May 5, using the NewsWhip Analytics dashboard.

They also conducted a thematic analysis of the 20 most engaging junk news stories on Facebook during the data collection period to gain a better understanding of the different political narratives favoured by junk news outlets ahead of an election.

On the latter front they say the most engaging junk narratives over the study period “tend to revolve around populist themes such as anti-immigration and Islamophobic sentiment, with few expressing Euroscepticism or directly mentioning European leaders or parties”.

Which suggests that EU-level political disinformation is a more issue-focused animal (and/or less developed) — vs the kind of personal attacks that have been normalized in US politics (and were richly and infamously exploited by Kremlin-backed anti-Clinton political disinformation during the 2016 US presidential election, for example).

This is likely also because of a lower level of political awareness attached to individuals involved in EU institutions and politics, and the multi-national state nature of the pan-EU project — which inevitably bakes in far greater diversity. (We can posit that just as it aids robustness in biological life, diversity appears to bolster democratic resilience vs political nonsense.)

The researchers also say they identified two noticeable patterns in the thematic content of junk stories that sought to cynically spin political or social news events for political gain over the pre-election study period.

“Out of the twenty stories we analysed, 9 featured explicit mentions of ‘Muslims’ and the Islamic faith in general, while seven mentioned ‘migrants’, ‘immigration’, or ‘refugees’… In seven instances, mentions of Muslims and immigrants were coupled with reporting on terrorism or violent crime, including sexual assault and honour killings,” they write.

“Several stories also mentioned the Notre Dame fire, some propagating the idea that the arson had been deliberately plotted by Islamist terrorists, for example, or suggesting that the French government’s reconstruction plans for the cathedral would include a minaret. In contrast, only 4 stories featured Euroscepticism or direct mention of European Union leaders and parties.

“The ones that did either turned a specific political figure into one of derision – such as Arnoud van Doorn, former member of PVV, the Dutch nationalist and far-right party of Geert Wilders, who converted to Islam in 2012 – or revolved around domestic politics. One such story relayed allegations that Emmanuel Macron had been using public taxes to finance ISIS jihadists in Syrian camps, while another highlighted an offer by Vladimir Putin to provide financial assistance to rebuild Notre Dame.”

Taken together, the researchers conclude that “individuals discussing politics on social media ahead of the European parliamentary elections shared links to high-quality news content, including high volumes of content produced by independent citizen, civic groups and civil society organizations, compared to other elections we monitored in France, Sweden, and Germany”.

Which suggests that attempts to manipulate the pan-EU election are either less prolific or, well, less successful than those which have targeted some recent national elections in EU Member States. And logic would suggest that co-ordinating election interference across a 28-Member State bloc does require greater co-ordination and resource vs trying to meddle in a single national election — on account of the multiple countries, cultures, languages and issues involved.

We’ve reached out to Facebook for comment on the study’s findings.

The company has put a heavy focus on publicizing its self-styled ‘election security’ efforts ahead of the EU election. Though it has mostly focused on setting up systems to control political ads — whereas junk news purveyors are simply uploading regular Facebook ‘content’ at the same time as wrapping it in bogus claims of ‘journalism’ — none of which Facebook objects to. All of which allows would-be election manipulators to pass off junk views as online news, leveraging the reach of Facebook’s platform and its attention-hogging algorithms to amplify hateful nonsense. While any increase in engagement is a win for Facebook’s ad business, so er…


Snap has another appointment in the apt saga of its ephemeral CFOs.

Four months after losing its CFO Tim Stone following a reported “personality clash” between Stone and CEO Evan Spiegel, Snap has promoted its VP of Finance Derek Andersen to the role, the company said Monday. Andersen is the company’s third CFO since March of 2017, when it went public.

Lara Sweet, who was serving as the company’s interim CFO as well as the chief accounting officer, will be stepping into a new role as chief people officer.

Snap has had a less cataclysmic 2019 in the public markets compared to its two previous calendar years. Snap has nearly doubled its share price since the year’s start, though the stock still sits just above where it was one year ago.


Maisie Williams’ time on Game of Thrones may have come to an end, but her talent discovery app Daisie is just getting started. Co-founded by film producer Dom Santry, Daisie aims to make it easier for creators to showcase their work, discover projects and collaborate with one another through a social networking-style platform. Only 11 days after Daisie officially launched to the public, the app hit an early milestone of 100,000 members. It also recently closed on $2.5 million in seed funding, the company tells TechCrunch.

The round was led by Founders Fund, which contributed $1.5 million. Other investors included 8VC, Kleiner Perkins, and newer VC firm Shrug Capital, from AngelList’s former head of marketing Niv Dror, who also separately invested. To date — including friends and family money and the founders’ own investment — Daisie has raised roughly $3 million.

It will later move toward raising a larger Series A, Santry says.

On Daisie, creators establish a profile as they would on a social network, find and follow other users, then seek out projects based on location, activity, or other factors.

“Whether it’s film, music, photography, art — everything is optimized around looking for collaborators,” explains Santry. “So the projects that are actively open and looking for people to get involved, are the ones we’re really pushing for people to discover and hopefully get involved with,” he says.

The company’s goal to offer an alternative path to talent discovery is a timely one. Today, the creative industry is waking up — as are many others — to the ramifications of the #MeToo and #TimesUp movements. As power-hungry abusers lose their jobs, new ways of working, networking and sourcing talent are taking hold.

As Williams said when she first introduced the app last year, Daisie’s focus is on giving the power back to the creator.

“Instead of [creators] having to market themselves to fit someone else’s idea of what their job would be, they can let their art speak for themselves,” she said at the time.

(Video: “Maisie Williams Shows Off Her Creator-Centric App Daisie,” via YouTube)

The app was launched into an invite-only beta on iOS last summer, and quickly saw a surge of users. After 37,000 downloads in week one, it crashed.

“We realized that the community was a lot larger than the product we had built, and that scale was something we needed to do properly,” Santry tells TechCrunch.

The team realized there was another problem, too: Once collaborators found each other in Daisie, there wasn’t a clear cut way for them to get in touch with one another as the app had no communication tools or ways to share files built in.

“That journey from concept to production was pretty muddy and quite muddled…so we realized, if we were bringing teams together, we actually wanted to give them a place to work — give them this creative hub…and take their project from concept all the way to production on Daisie,” Santry notes.

With this broader concept in mind, Daisie began fundraising in San Francisco shortly after the beta launch. The round initially closed in October 2018, but was more recently reopened to allow Dror’s investment.

With the additional funding in tow, Daisie has been able to grow its team from five to eighteen, including new hires from Monzo, Deliveroo, BBC, Microsoft, and others — specifically engineers who were familiar with designing apps for scale. Tasked with developing better infrastructure and a more expansive feature set, the team set to work on bringing Daisie to the web.

Nine months later, the new version launched to the public and is stable enough to handle the load. Today, it topped 100,000 users — most of whom are in London. However, Daisie plans to focus on taking its app to other cities, including Berlin, New York, and L.A., going forward.

The company has monetization ideas in mind, but the app does not currently generate revenue. However, it’s already fielding inquiries from companies who want Daisie to find them the right talent for their projects.

“We want the best for the creators on the platform, so if that means bringing clients on — and hopefully giving those connectivity opportunities — then we’ll absolutely [go] down those roads,” Santry says.

The app may also serve as a talent pipeline for Maisie Williams’ own Daisy Chain Productions. In fact, Daisie recently ran a campaign called London Creates which connected young, emerging creators with project teams, two of which were headed by Santry’s Daisy Chain Productions co-founders, Williams and Bill Milner.

Now Daisy Chain Productions is going to produce a film from the Daisie collaboration as a result.

(Video: “Daisie presents: LDN Creates,” via YouTube)

While celebs sometimes do little more than lend their name to projects, Williams was hands-on in terms of getting Daisie off the ground, Santry says. During the first quarter of 2019, she worked on Daisie 9-to-5, he notes. But she has since started another film project and plans to continue to work as an actress, which will limit her day-to-day involvement. Her role now and in the future may be more high-level.

“I think her role is going to become one of, culturally, like: where does Daisie stand? What do we stand for? Who do we work with? What do we represent?” he says. “How do we help creators everywhere? That’s mainly what Maisie wants to make sure Daisie does.”


It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often drive advances, or open new areas of inquiry, in the other. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy” the hexapod robot.

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen interesting papers along these lines.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
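To make the shape of that loop concrete, here is a minimal, hypothetical Python sketch of reward-driven gait learning: the only signal is how far the robot travelled, and the controller improves by keeping random perturbations that increase that reward. The simulator class, dimensions and hill-climbing update are illustrative stand-ins, not Facebook's actual setup, which has not been published in this form.

import numpy as np

class HexapodSim:
    """Toy stand-in environment: joint commands in, forward displacement out."""
    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.hidden = rng.normal(size=(18, 18))  # unknown "physics" the robot must discover

    def rollout(self, policy, steps=200):
        state = np.zeros(18)
        distance = 0.0
        for _ in range(steps):
            action = np.tanh(policy @ state + 0.1)   # 18 joint commands
            state = np.tanh(self.hidden @ action)    # crude body dynamics
            distance += action.mean()                # proxy for forward motion
        return distance                              # the only reward signal

def learn_to_walk(iterations=500, noise=0.05, seed=1):
    env = HexapodSim()
    rng = np.random.default_rng(seed)
    policy = np.zeros((18, 18))          # starts with no idea how to move its legs
    best = env.rollout(policy)
    for _ in range(iterations):
        candidate = policy + noise * rng.normal(size=policy.shape)
        reward = env.rollout(candidate)
        if reward > best:                # keep perturbations that travel further
            policy, best = candidate, reward
    return policy, best

if __name__ == "__main__":
    _, distance = learn_to_walk()
    print(f"learned gait covers roughly {distance:.2f} units forward")

The real research aims to make this kind of trial-and-error loop converge in hours rather than weeks; the toy above only shows where the "reward for moving forward" sits in the process.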

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the autodidactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park. (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
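One common way to implement that kind of "curiosity" is as an intrinsic bonus added to the task objective, where the bonus reflects the agent's own uncertainty about what an action will do — for example, disagreement within an ensemble of predictive models. The sketch below is a hypothetical illustration of that idea; the class and function names are invented for the example and do not come from Facebook's research.

import numpy as np

class ForwardModel:
    """Tiny ensemble predictor; disagreement among members stands in for uncertainty."""
    def __init__(self, n_members=5, dim=4, seed=0):
        rng = np.random.default_rng(seed)
        self.members = [rng.normal(size=(dim, dim)) for _ in range(n_members)]

    def uncertainty(self, state, action):
        preds = np.stack([m @ (state + action) for m in self.members])
        return preds.std(axis=0).mean()   # high disagreement = uncertain outcome

def choose_action(model, state, candidate_actions, task_value, beta=0.5):
    """Score each action by task value plus a curiosity bonus for uncertain outcomes."""
    scores = [task_value(a) + beta * model.uncertainty(state, a) for a in candidate_actions]
    return candidate_actions[int(np.argmax(scores))]

if __name__ == "__main__":
    model = ForwardModel()
    state = np.zeros(4)
    actions = [np.eye(4)[i] for i in range(4)]   # four toy actions to choose from
    picked = choose_action(model, state, actions, task_value=lambda a: float(a[0]))
    print("picked action:", picked)

The trade-off is in the beta weight: set it too high and the arm spends all its time "double checking," set it to zero and it never takes the exploratory glances that pay off later.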

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, or gadget, or robot left these tasks to be accomplished “just in time,” it would produce CPU usage spikes, visible latency in the image, and all kinds of stuff the user or system engineer doesn’t want. But if it’s doing it all the time, that’s just as bad. If instead the AI agent is exerting curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound, and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
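In practice, that can be as simple as laying the tactile sensor's pressure readings out on a 2D grid and normalizing them like pixel intensities, at which point an image-style feature extractor can run on them unchanged. The snippet below is a hypothetical, self-contained illustration of that reshaping, not Facebook's actual pipeline.

import numpy as np

def tactile_to_image(pressure_grid):
    """Normalize a 2D array of taxel pressures into a 0-255 single-channel 'image'."""
    p = np.asarray(pressure_grid, dtype=float)
    p = (p - p.min()) / (np.ptp(p) + 1e-8)
    return (p * 255).astype(np.uint8)

def edge_response(image):
    """Apply a simple Sobel-style edge filter: the same visual feature works on a
    photo or on a pressure map once both are plain 2D arrays."""
    kernel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = abs((kernel * image[i:i + 3, j:j + 3]).sum())
    return out

if __name__ == "__main__":
    touch = np.random.default_rng(0).random((16, 16))   # fake 16x16 tactile sensor frame
    img = tactile_to_image(touch)
    print("edge map shape:", edge_response(img).shape)  # (14, 14)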

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.

So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen, and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.
