
“To FaceApp or not to FaceApp?” That’s the question the internet struggled with last week as it rode a rollercoaster of reactions to the latest social media challenge.

The FaceApp challenge kicked off last week as the “internet’s latest viral obsession” — upload your photo and watch yourself age instantaneously. By Wednesday, concerns had surfaced that millions of people may have inadvertently exposed their data to a surveillance state.

While celebrities were lobbing this social media challenge at each other, tech reporters began to remind users of FaceApp’s history. Two years ago, when the app launched, coverage centered mostly on the photo-editing app’s quirky filters — including short-lived and offensive ethnicity filters. But questions also circulated about the company’s headquarters in Russia — a country that has tried to influence U.S. elections through internet hacking and social media trolling.

A quick glance at FaceApp’s terms of service agreement shows the company holds “perpetual, irrevocable” rights over its users’ app-generated photos, i.e. your selfie. And the worries expanded when it became clear FaceApp creates its identity-morphing images by uploading the photos to cloud servers rather than processing the data right there on a person’s phone. Senate Minority Leader Chuck Schumer called for the FBI to investigate the app.

But the hot take pendulum swung back.

A number of news organizations claimed that the Russia connection had been overblown after FaceApp’s CEO Yaroslav Goncharov told TechCrunch that the app’s photo processing occurs on Amazon and Google cloud platforms, the same infrastructure many U.S.-based tech companies rely on.

“Think FaceApp is scary? Wait till you hear about Facebook,” wrote WIRED, alluding to the U.S. tech giant’s own issues with harvesting users’ data. Another take, by Vox’s Kaitlyn Tiffany, went as far as to claim the fears over FaceApp were tinged with xenophobia and devoid of context, citing a number of examples, including my own tweets that “FaceApp uploads your photos to Russia.”

All of this made me wonder if the fears about FaceApp were exaggerated. So I contacted FaceApp, which did not return our request for comment, and reached out to five experts with backgrounds in areas like data security and facial recognition hacking, people who have played pivotal roles in investigations into privacy violations like the Cambridge Analytica scandal.

Their resounding answer was yes, the privacy questions surrounding FaceApp — and other foreign-based apps like TikTok — are absolutely a cause for concern.

Aside from the Russia connection, they said huge hoards of facial data are ripe for sale to emerging illicit markets that use selfies to gain access to bank accounts. Research shows selfies can also be used to crack the facial identification systems used to secure some smartphones.

Finally, from a legal perspective, the fact that the company is based in Russia does make a difference. Lumping FaceApp in with U.S. companies is nonsensical — Americans have no legal means to sue a Russian company if it abuses their data, for one — and such comparisons may breed a defeatist mood about keeping personal data secure.

Here are three reasons that FaceApp worries them.

1. The Russia factor (a.k.a. have you heard of the Yarovaya law?)

To apply those eye-catching filters, FaceApp uploads your photos and other user information to Amazon and Google clouds — a step it previously took without explicit user consent. But this data can be stored and processed “in the United States or any other country in which FaceApp, its Affiliates or Service Providers maintain facilities,” according to the company’s privacy policy. Whether any of this data is ever copied and stored on servers in Russia is unknown, a gray area highlighted by multiple tech reporters, including Forbes’ Thomas Brewster:

“The data in those Amazon data centers could be mirrored back to computers in Russia. It’s unclear how much access FaceApp employees have to those images, and Forbes hadn’t received comment from the company at the time of publication about just what it does with uploaded faces.”

If a user’s digital data is stored inside Russian borders, that country’s government can exert broad control over it, under legislation passed in 2016 called the Yarovaya law. Just last month, Russian security service FSB cited the law to demand user data from Los Angeles-based Tinder, a request to which the company reportedly agreed.

“If data is stored in Russia, the Russian government has jurisdiction over it,” said David Carroll, a media design professor at The New School in New York City. Carroll is best known for his legal bid to reclaim his data from Cambridge Analytica, an effort documented in a new Netflix film. “It is unimaginable that a tech company could exist in Russia and not have some kind of subservience to the FSB,” he said.

In the U.S., tech companies also have to fork over data to the government, but with legal limitations on when and where. The 2018 CLOUD Act gave U.S. law enforcement the ability to access the digital information of any person who uses services from a U.S.-based technology company — but only with a warrant or subpoena.

Those judicial safeguards may seem small against the overwhelming might of agencies like the FBI or NSA, but tech companies like Google have used these court procedures to block handing over information like location data. The Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU) have also fought and won recent cases involving law enforcement conducting warrantless collections of people’s data.


Building in St. Petersburg, Russia, where the company developing FaceApp — Wireless Lab — is officially registered. Image by Reuters

“The U.S. has the FISA court process. We have the Constitution. We have the democratic process,” Carroll said. “We have some protections and backstops against the abuse of these powers, whereas in China and Russia they have none of that.”

Carroll included China because of the viral popularity of TikTok, the video app that’s grown a huge international base of mostly millennials and Generation Zers, rivaling YouTube, Facebook and Instagram in its number of active users.

In Quartz, Carroll argued that TikTok, which is owned by the Beijing-based company ByteDance, has a privacy policy that creates legal and privacy vulnerabilities for U.S. users — echoing his own experience with Cambridge Analytica. TikTok’s policy states that U.S. users do not have the right to know where their TikTok data is processed, nor how the company shares their information with third parties (unless they are children in California, which has stricter protections under its Consumer Privacy Act).

Those conditions for U.S. users are noteworthy because the company told Carroll that TikTok subscribers who used the platform before February 2019 may have had their data processed in China.

“As Americans, we have to wrestle with this, as we download fun apps like TikTok and FaceApp,” Carroll said. “That we are putting ourselves out of the jurisdiction of a constitutional republic into the jurisdictions of autocratic regimes. Is it worth the fun?”

WATCH: How FaceApp highlights a gap in U.S. privacy protections

The European Union has taken measures to crack down on these data sharing loopholes; European TikTok users have greater protections under the General Data Protection Regulation.

The PBS NewsHour contacted FaceApp on Friday to ask if its data could ever fall under the auspices of the Yarovaya law and if it would do everything in its power to keep user data from being accessed by the Russian government, but the company has not responded to our interview request.

2. Hackers can break into bank accounts with facial data

During the FaceApp news bonanza, many raised the prospect of the company’s data being acquired by the Russian government to build its facial recognition capabilities. Russia is certainly advancing its surveillance with facial recognition, and those activities have involved the use of viral photo apps.

On the surface, such facial recognition software does not seem like much of a concern unless you travel to Russia. But huge databases of selfies can be treasure troves for people who conduct identity theft or build bots for Twitter and Facebook.

“There is a huge marketplace around fake IDs and the verification of identity that involve images of our faces,” said Alex Holden, chief information security officer of Hold Security LLC. “We also see an increased amount of traffic on the dark web of offers to create fake documents like driver’s licenses, passports and other I.D. cards.”

Holden said that’s concerning because institutions are increasingly relying on facial identification and automation to secure cryptocurrencies, bank accounts, electronic documents and international borders. These authentication systems come into play when companies ask you to upload copies of your government-issued IDs, Holden said.

A 2015 survey of the Dark Web marketplace Agora found similar trends in the popularity of counterfeit documents. Identity theft affects about one in 10 Americans over the age of 16 — about 26 million people — and the overwhelming majority of these cases involve existing bank and credit card accounts.

Account takeover fraud cost consumers about $5 billion in 2017, according to market analysis by Javelin Strategy & Research. And there are reports of scammers using fake IDs in person at phone stores to swap SIM cards — a growing way to hijack a person’s phone.

For scammers, the game is simple: They start by acquiring your personal details from one of any number of high-profile data breaches — Equifax, OPM, you name it. With your address, phone number and other information in tow, they can create the text needed for a fake ID — but they still need a photo.


FaceApp uses artificial intelligence and machine learning to add lifelike alterations — like this aging filter — to a person’s selfie. Image by Reuters

One could argue that these risks exist with selfies taken on other apps like Instagram or Snap, but FaceApp stands out because it coaxes people into taking ID-style photographs.

“The accuracy of a facial recognition system depends on three main factors,” said Anil Jain, a computer engineer at Michigan State University who studies biometric security. Those factors are your pose (looking head-on at the camera), your facial expression and the lighting, he said.

If a scammer can acquire a suitable, front-facing photo of a person — say by gaining access to a giant database of selfies — then they could easily use those photos to bypass authentication protocols, Jain said. This risk is amplified by the fact that those selfies are linked to a person’s metadata — bits of info like geolocation and your device’s unique identifier that are automatically embedded into photos taken on a smartphone.
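To get a sense of how much of that metadata rides along with an ordinary photo, here is a minimal sketch (not FaceApp’s code) that lists the EXIF tags, including GPS coordinates, embedded in a JPEG. It assumes the Pillow library is installed; “selfie.jpg” is a hypothetical file name.

# A minimal sketch (not FaceApp's code): list the EXIF metadata, including
# GPS coordinates, embedded in a smartphone photo. Assumes the Pillow
# library; "selfie.jpg" is a hypothetical file name.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def dump_photo_metadata(path):
    """Print every EXIF tag stored in the photo at `path`."""
    exif = Image.open(path)._getexif() or {}
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)
        if name == "GPSInfo":
            # GPS data is a nested dictionary with its own tag names.
            for gps_id, gps_value in value.items():
                print("GPS", GPSTAGS.get(gps_id, gps_id), ":", gps_value)
        else:
            print(name, ":", value)  # e.g. Make, Model, DateTimeOriginal

if __name__ == "__main__":
    dump_photo_metadata("selfie.jpg")

Stripping those tags before uploading (many photo editors and the exiftool utility can do it) removes at least the location trail, though not the face itself.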

Speaking of smartphones, Jain and others have shown that face-unlock systems on Android phones can be easily deceived with 2D photos of people or 3D-printed heads. Face ID on iPhone hasn’t been cracked, at least not credibly, but people are constantly probing Apple’s facial recognition system for weaknesses. Jain’s lab has found that recognition systems like Apple’s might be vulnerable if hackers can access the underlying data templates used to store Face ID profiles.

3. Framing privacy risks as inescapable makes them harder to escape

Americans would have few options to fight FaceApp if the company abuses their data, said Jennifer Lynch, EFF’s senior staff attorney and the surveillance litigation director, who agreed with Carroll on the matter.

“If you’re in the United States, you would have zero recourse,” Lynch said. Yet “the bigger problem is that for the most part, people don’t know where their data is going. That’s true of just about any app you download onto your phone.”

The reality is that consumers are generally unfamiliar with digital privacy policies. But conflating and downplaying the risks by saying that “FaceApp is taking your data, but so is everything else” discourages consumers from fighting for privacy rights, others say.

“If you are faced with something that makes you uncomfortable but you think you can’t do anything about it, you explain it away,” said Chris Hoofnagle, a law professor at the University of California, Berkeley. “You might blame the victim. You might find ways to rationalize away the problem.”

Hoofnagle said those feelings are known as “digital resignation” — a concept developed by Joseph Turow, a privacy expert at UPenn’s Annenberg School for Communication. Turow argues that people have become resigned to privacy violations because they feel like they have no alternative. In a review article published in March, Turow and his colleague Nora Draper lay out how corporations cultivate digital resignation in what the scholars dub “surveillance capitalism”:

“Resignation supports capitalism by constructing corporate power as an inevitable and immovable feature of contemporary life…Companies also discourage individuals from enlisting collective anger about, or even opting out of, commercial data retrieval, by highlighting conveniences and delights that come from engagement within systems that carry out surveillance.”

If you don’t trust these academics that tech companies are capitalizing on your defeatist feelings about digital privacy, then maybe you’ll listen to the tech companies themselves.

Two months ago, lawyers for Facebook argued its subscribers should have no expectation of privacy “because by sharing with a hundred friends on a social media platform, which is an affirmative social act to publish, to disclose, to share ostensibly private information with a hundred people, you have just, under centuries of common law, under the judgment of Congress, under the SCA, negated any reasonable expectation of privacy.” On Wednesday, the Federal Trade Commission issued a $5 billion fine against Facebook, stating that the company deceived its users on how it handled their phone numbers and facial recognition data.

Hoofnagle said Google executives have made similar statements in the past.

“When [Google CEO] Sundar Pichai says, ‘Privacy shouldn’t be a luxury good,’ your natural conclusion might be, ‘Oh, well, he means that everybody should have privacy,’” Hoofnagle said. “No, they mean that no one should have privacy.”

Despite these feelings of digital resignation, consumers DO want privacy. The Berkeley Consumer Privacy Survey, which Hoofnagle co-runs, shows that most Americans consider their mobile phone data to be private but are unsure of how to protect their information and expand their digital rights. Their research also shows what you may already suspect: People don’t take the time to read an app’s privacy statements.

Others have observed similar trends. In 2016, the Pew Research Center found 74 percent of American adults think it is very important to be in control of their information — but 91 percent also feel “consumers have lost control of how personal information is collected and used by companies.”

“Asking people to know about an app’s privacy rules is like asking them to know about the chemical ingredients of the foods they eat. It’s just beyond them,” Hoofnagle said. People need to be able to trust that someone is making sure their food (and their digital privacy) is safe, he added.

The EFF’s Lynch agreed, pointing out that local government agencies — such as law enforcement or state DMVs — are also harvesting and sharing data in suspicious ways.

“I don’t think that it’s actually the responsibility of the consumer or the American citizens to hold either Facebook or FaceApp to account. That’s the responsibility of regulators,” Lynch said. At the individual level, she said, there’s nothing really that anyone can do “absent just getting off of Facebook. But that’s not realistic for a vast number of people in the United States and abroad.”

Should you be worried about FaceApp?

That question is hard to answer without more information from the company.

FaceApp told TechCrunch and 9to5Mac that user data is not transferred to Russia, but again, its terms of service don’t rule out the possibility. Its statement also claims that it doesn’t sell or share information with third parties, even though its privacy policy disagrees. Its policy also allows the company to keep photos indefinitely, a point its public statements have not contradicted:

“Most images are deleted from our servers within 48 hours from the upload date,” the company stated. Note that it said “most,” not “all,” and, based on everything written earlier, you have no legal way of forcing the company to verify whether this claim is true.

Carroll, Jain and Holden said people who doubt the seriousness of their exposure with FaceApp should consider Russia’s interference campaign during the 2016 U.S. presidential election. A database full of selfies would be treasure for troll farms that target social media, and we already know from the Mueller Report that the Internet Research Agency, a Russian troll farm, used real photos to create fake profiles and sow confusion on social media.

Being able to tell real photos of people from fake ones can be essential to social media companies when they’re fighting fake bot accounts. Jain’s lab, for example, has designed software that can accurately match a selfie of a person with a government-issued ID photo 97 percent of the time.
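For readers curious what that kind of selfie-to-ID matching looks like in practice, here is a rough sketch using the open-source face_recognition library; it is an illustration, not the software from Jain’s lab. The file names are hypothetical, and the 0.6 distance cutoff is the library’s commonly used default rather than anything tied to the 97 percent figure above.

# A rough illustration (not Jain's lab software) of selfie-to-ID matching
# using the open-source face_recognition library. File names are
# hypothetical; 0.6 is the library's commonly used default threshold.
import face_recognition

def selfie_matches_id(selfie_path, id_photo_path, threshold=0.6):
    """Return True if the face in the selfie appears to match the ID photo."""
    selfie_encodings = face_recognition.face_encodings(
        face_recognition.load_image_file(selfie_path))
    id_encodings = face_recognition.face_encodings(
        face_recognition.load_image_file(id_photo_path))
    if not selfie_encodings or not id_encodings:
        return False  # no face detected in one of the images
    # A lower distance between the 128-number face encodings means a closer match.
    distance = face_recognition.face_distance([id_encodings[0]],
                                              selfie_encodings[0])[0]
    return distance <= threshold

print(selfie_matches_id("selfie.jpg", "drivers_license.jpg"))

The same kind of comparison that helps a platform weed out stolen profile photos is what makes a large database of well-lit, front-facing selfies valuable to someone assembling fake identities.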


This illustration shows a user as she tries to edit her photo on FaceApp. Photo by Nasir Kachroo/NurPhoto via Getty Images

“As these platforms increase their efforts to limit what they call inauthentic behaviors, it motivates bad actors to find ways to create more authentic representations of people,” Carroll said.

But he added that the most worrisome threats surrounding broad collections of facial data are the ones yet to be developed.

“Why are we waiting for the damage to be done rather than be more cautious and sensitive to these risks?” Carroll said, referring to critics who say the risks from FaceApp are overblown. “People say, ‘Well, there’s no evidence that it’s happened yet.’ And I would say, ‘Well, why are we waiting for the disaster? Why are we waiting for a mass identity theft to happen?’”

Gretchen Frazee, Dan Cooney and Berly McCoy contributed to reporting.
