Solution Providers: FaceApp Security Concerns Are 'A Little Bit Overblown'

‘The reality is we're talking about single pictures that users intend to share with a larger community anyway,’ says Kudelski Security CTO Andrew Howard.

FaceApp doesn't appear to pose a significantly greater security risk to users than typical social media activity, solution provider experts told CRN.

"Given the type of data they're looking at, what they're doing with it and the lack of other sensitive data being brought in, this particular situation is a little bit overblown," said Andrew Howard, CTO of Phoenix-based solution provider Kudelski Security.

The photo filter app has skyrocketed to the top of the download charts this week due to its ability to transform any face to simulate aging, adding graying hair and wrinkles. But FaceApp's growing popularity has raised concern among Democratic officials because the company is based in Russia. In a letter to the FBI and the Federal Trade Commission Thursday, Sen. Chuck Schumer (D-N.Y.) called for an investigation into the company.

[Related: Kudelski Security Names New CEO To Fuel IoT, Blockchain Investments]

The Russian government's enormous influence does create a greater risk that data provided to a commercial entity could be used in undesirable ways, Howard said. But the concern isn't as significant here, since no sensitive private or commercial data is being shared with FaceApp beyond what many users already post on their social networks, according to Howard.

"The reality is we're talking about single pictures that users intend to share with a larger community anyway," Howard said. "In all likelihood, users are likely sharing their information in places that have the same if not higher risk."

FaceApp told TechCrunch in a statement that while its research and development teams are based in Russia, no user data is transferred there. The company didn't immediately respond to a request for comment from CRN.

Facial recognition data has become particularly sensitive in recent months due to the creation of high-profile deepfake videos, where artificial intelligence algorithms are combined with actual video footage and photos to create videos that look legitimate but are fake, said Brian Wrozek, vice president of corporate security at Denver-based Optiv Security, No. 27 on the 2019 CRN Solution Provider 500.

Celebrities, corporate executives and other high-risk users should be cautious when giving a third-party application like FaceApp access to photos stored on a mobile device, Wrozek said. A typical user, though, has less to be concerned about since bad actors don't have much of a reason to create deepfake videos of random people, according to Wrozek.

"You've got to think in a bigger context when looking at your own risk profile," Wrozek said.

Nonetheless, sharing photos with a third-party application presents more risk than typical text-based data sharing, since images tend to be richer in metadata, Howard said. Users often share more information in a photo than they intended, because metadata such as the GPS coordinates of where the photo was taken is frequently embedded in the image file, Howard said.
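
For readers curious what a single photo can reveal, here is a minimal sketch that reads embedded GPS metadata from an image using Pillow, a widely used Python imaging library (recent versions supply `get_ifd`). The file name is a placeholder, and not every photo carries these tags.

```python
from PIL import Image               # Pillow, a common Python imaging library
from PIL.ExifTags import GPSTAGS

def gps_metadata(path):
    """Return any GPS EXIF tags embedded in the image, with readable names."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the standard GPSInfo EXIF tag
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

def to_decimal(dms, ref):
    """Convert EXIF degrees/minutes/seconds rationals to a signed decimal."""
    degrees, minutes, seconds = (float(v) for v in dms)
    sign = -1 if ref in ("S", "W") else 1
    return sign * (degrees + minutes / 60 + seconds / 3600)

# Hypothetical file: print coordinates if the photo happens to be geotagged.
tags = gps_metadata("selfie.jpg")
if "GPSLatitude" in tags:
    print(to_decimal(tags["GPSLatitude"], tags["GPSLatitudeRef"]),
          to_decimal(tags["GPSLongitude"], tags["GPSLongitudeRef"]))
```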

In the big picture, though, Howard said a foreign adversary looking to collect lots of pictures of people's faces would have many options available to them outside photo filtering apps.

"The average consumer has so many photos online that this is only part of the threat," Howard said.

State governments already have pictures of most of their citizens thanks to driver's licenses, and may have the capacity to apply facial recognition technology to those images, said Tom Turkot, vice president of client solutions for Buffalo Grove, Ill.-based Arlington Computer Products.

"We've given up a lot of our information already," Turkot said. "I don't see it being a big deal."

Wrozek was pleased to see FaceApp state that it deletes most images from its servers within 48 hours of upload, and that it does not sell user information to third parties outside FaceApp and its affiliated group of companies. Going forward, Wrozek would like to know how FaceApp plans to verify compliance with those policies.
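
FaceApp hasn't published how its deletion pipeline works, so the following is only a generic sketch of how a 48-hour retention policy might be enforced with a scheduled cleanup job; the upload directory and the use of file modification time as a proxy for upload time are assumptions, not FaceApp's implementation.

```python
import os
import time

RETENTION_SECONDS = 48 * 60 * 60  # the 48-hour window FaceApp describes

def purge_expired(upload_dir: str) -> int:
    """Delete uploads older than the retention window; return how many were removed."""
    now = time.time()
    removed = 0
    for name in os.listdir(upload_dir):
        path = os.path.join(upload_dir, name)
        # Treat the file's modification time as its upload time (an assumption).
        if os.path.isfile(path) and now - os.path.getmtime(path) > RETENTION_SECONDS:
            os.remove(path)
            removed += 1
    return removed

# A real deployment would run this from a scheduler such as cron, e.g. hourly:
#   0 * * * * python purge_uploads.py
```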

Similarly, Howard said he was happy to hear that security researchers have found FaceApp only processes photos selected by users and isn't taking in additional images or data without the customer's permission.

"They seem to be playing by the rules," Howard said.

FaceApp should be transparent about what information it has on users, make it easy for users to request that their information be deleted without having to dig through links, and bring in third-party auditors to ensure that policies and procedures are adhered to, he said. Users, meanwhile, should delete FaceApp once they're finished using it to maintain good data hygiene and minimize their exposure, Wrozek said.

Given that FaceApp isn't based in the U.S. or the European Union, it's less clear who would regulate or audit the company to ensure that records are retained or shared in accordance with its policies, Turkot said. Customers would benefit from random reviews verifying that images are actually deleted after 48 hours, as well as audits into whether user data is being sold, he said.

Users should also keep an eye on the terms of service to make sure the terms aren't changed in a way that puts them at greater risk, Wrozek said; a simple way to automate that check is sketched at the end of this article. Howard cautioned that such policies are often subject to change, inconsistently enforced, and typically overlooked by consumers.

As a result, Howard said, only an extremely small percentage of users actually notice when a business changes its data privacy policy. Generally, Howard said, customers who aren't paying for an application should assume the app is making money by sharing data with advertisers or through some other financial arrangement.

"If they're not going to take a fee, that's really their only option," Howard said.