Security

Google starts pulling unvetted Android apps that access call logs and SMS messages

Google is removing apps from Google Play that request permission to access call logs and SMS text message data but haven’t been manually vetted by Google staff.

The search and mobile giant said the removals are part of a move to cut down on the number of apps that have access to sensitive calling and texting data.

Google said in October that Android apps would no longer be allowed to use the legacy permissions, as part of a wider push for developers to use newer, more secure and privacy-minded APIs. Many apps request access to call logs and texting data to verify two-factor authentication codes, for social sharing, or to replace the phone dialer. But Google acknowledged that this level of access can be, and has been, abused by developers who misuse the permissions to gather sensitive data — or mishandle it altogether.

“Our new policy is designed to ensure that apps asking for these permissions need full and ongoing access to the sensitive data in order to accomplish the app’s primary use case, and that users will understand why this data would be required for the app to function,” wrote Paul Bankhead, Google’s director of product management for Google Play.

Any developer wanting to retain the ability to ask a user’s permission for calling and texting data has to fill out a permissions declaration.

Google will review the app and its reasons for retaining access, weighing several considerations: why the developer is requesting the permissions, the user benefit of the feature that needs them and the risks associated with having access to call and texting data.

Bankhead conceded that under the new policy, some use cases will “no longer be allowed,” rendering some apps obsolete.

Google said tens of thousands of developers have already submitted new versions of their apps, either removing the need for call and texting permissions or submitting a permissions declaration.

Developers with a submitted declaration have until March 9 to receive approval or remove the permissions. In the meantime, Google has a full list of permitted use cases for the call log and text message permissions, as well as alternatives.

The last two years alone have seen several high-profile cases of Android apps or other services leaking or exposing call and text data. In late 2017, popular Android keyboard ai.type exposed a massive database covering 31 million users, including 374 million phone numbers.

Twitter bug revealed some Android users’ private tweets

Twitter accidentally revealed some users’ “protected” (aka, private) tweets, the company disclosed this afternoon. The “Protect your Tweets” setting typically allows people to use Twitter in a non-public fashion. These users get to approve who can follow them and who can view their content. For some Android users over a period of several years, that may not have been the case — their tweets were actually made public as a result of this bug.

The company says that the issue impacted Twitter for Android users who made certain account changes while the “Protect your Tweets” option was turned on.

For example, if the user had changed their account email address, the “Protect your Tweets” setting was disabled.

We’ve become aware of and fixed an issue where the “Protect your Tweets” setting was disabled on Twitter for Android. Those affected have been alerted and we’ve turned the setting back on for them. More here: https://t.co/0qM5B1S393

— Twitter Support (@TwitterSupport) January 17, 2019

Twitter tells TechCrunch that’s just one example of an account change that could have prompted the issue. We asked for other examples, but the company declined to share any specifics.

What’s fairly shocking is how long this issue has been happening.

Twitter says that users may have been impacted by the problem if they made these account changes between November 3, 2014, and January 14, 2019 — the day the bug was fixed. 

The company has now informed those who were affected by the issue, and has re-enabled the “Protect your Tweets” setting if it had been disabled on those accounts. But Twitter says it’s making a public announcement because it “can’t confirm every account that may have been impacted.” (!!!)

The company explained to us that it was only able to notify people whose accounts it could confirm were impacted, and that it doesn’t have a complete list of affected accounts. For that reason, it’s unable to offer an estimate of how many Twitter for Android users were affected in total.

This is a sizable mistake on Twitter’s part, as it essentially made public content that users had explicitly indicated they wanted kept private. It’s unclear at this time whether the issue will result in a GDPR violation and fine.

The one bright spot is that some of the impacted users may have noticed their account had become public because they would have received alerts — like notifications that people were following them without their direct consent. That could have prompted the user to re-enable the “protect tweets” setting on their own. But they may have chalked up the issue to user error or a small glitch, not realizing it was a system-wide bug.

“We recognize and appreciate the trust you place in us, and are committed to earning that trust every day,” wrote Twitter in a statement. “We’re very sorry this happened and we’re conducting a full review to help prevent this from happening again.”

The company says it believes the issue is now fully resolved.

Fortnite bugs put accounts at risk of takeover

With one click, any semi-skilled hacker could have silently taken over a Fortnite account, according to a cybersecurity firm that says the bug is now fixed.

Researchers at Check Point say the three vulnerabilities, chained together, could have affected any of the game’s 200 million players. The flaws, if exploited, would have let an attacker steal the account access token set on the gamer’s device once they entered their password.

Once stolen, that token could be used to impersonate the gamer and log in as if they were the account holder, without needing their password.

The researchers say the flaw lies in how Epic Games, the maker of Fortnite, handles login requests. They said they could send any user a crafted link that appears to come from Epic Games’ own domain and steal the access token needed to break into an account.

Check Point’s Oded Vanunu explains how the bug works. (Image: supplied)

“It’s important to remember that the URL is coming from an Epic Games domain, so it’s transparent to the user and any security filter will not suspect anything,” said Oded Vanunu, Check Point’s head of products vulnerability research, in an email to TechCrunch.

Here’s how it works: the user clicks on a link pointing to an epicgames.com subdomain, into which the hacker has injected malicious code hosted on their own server by exploiting a cross-site scripting weakness in that subdomain. Once the malicious script loads, unbeknownst to the Fortnite player, it steals their account token and sends it back to the hacker.

“If the victim user is not logged into the game, he or she would have to log in first,” said Vanunu. “Once that person is logged in, the account can be stolen.”
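
To make the class of bug concrete — and to be clear, this is a generic, hypothetical sketch, not Epic Games’ actual code, with invented routes and names — here is what a reflected cross-site scripting hole on a trusted domain can look like, along with the usual fix of escaping untrusted input before it is rendered:

```python
# Hypothetical illustration of a reflected XSS flaw of the kind described above.
# This is NOT Epic Games' code; the routes, names and data are invented.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/stats")
def stats_vulnerable():
    # Vulnerable pattern: user-supplied input is echoed into the page unescaped.
    # A crafted link like /stats?player=<script>...</script> runs attacker
    # JavaScript on the trusted domain, where it can read and exfiltrate tokens.
    player = request.args.get("player", "")
    return f"<h1>Stats for {player}</h1>"

@app.route("/stats-fixed")
def stats_fixed():
    # Fix: escape untrusted input (or use an auto-escaping template engine),
    # so injected markup is rendered as plain text instead of being executed.
    player = request.args.get("player", "")
    return f"<h1>Stats for {escape(player)}</h1>"

if __name__ == "__main__":
    app.run()
```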

Epic Games has since fixed the vulnerability.

“We were made aware of the vulnerabilities and they were soon addressed,” said Nick Chester, a spokesperson for Epic Games. “We thank Check Point for bringing this to our attention.”

“As always, we encourage players to protect their accounts by not re-using passwords and using strong passwords, and not sharing account information with others,” he said.

When asked, Epic Games would not say if user data or accounts were compromised as a result of this vulnerability.

Wrest control from a snooping smart speaker with this teachable ‘parasite’

What do you get when you put one internet-connected device on top of another? A little more control than you’d otherwise have, in the case of Alias, the “teachable ‘parasite’” — an IoT project and smart speaker topper made by two designers, Bjørn Karmann and Tore Knudsen.

The Raspberry Pi-powered, fungus-inspired blob’s mission is to whisper sweet nonsense into Amazon Alexa’s (or Google Home’s) always-on ear so it can’t accidentally snoop on your home.

Project Alias from Bjørn Karmann on Vimeo.

Alias will only stop feeding noise into its host’s speakers when it hears its own wake command — which can be whatever you like.

The middleman IoT device has its own local neural network, allowing its owner to christen it with a name (or sound) of their choosing via a training interface in a companion app.

The open-source TensorFlow library was used for building the name training component.
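
For a rough idea of what training a custom wake word on-device involves, here is a minimal, hypothetical sketch of a tiny audio classifier in TensorFlow — not the designers’ actual code (that lives in their GitHub repo), and it assumes the audio has already been turned into fixed-size MFCC feature frames:

```python
# Hypothetical sketch of a small wake-word classifier, loosely in the spirit of
# the local model Alias trains. Not the project's real code; shapes are assumed.
import numpy as np
import tensorflow as tf

NUM_FRAMES, NUM_MFCC = 49, 13   # assumed MFCC feature shape per one-second clip
NUM_CLASSES = 2                 # class 1 = custom wake word, class 0 = background

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FRAMES, NUM_MFCC, 1)),
    tf.keras.layers.Conv2D(8, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on a handful of user-recorded examples of the chosen name plus some
# background noise clips. Random placeholder data stands in for real recordings.
x_train = np.random.rand(32, NUM_FRAMES, NUM_MFCC, 1).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=(32,))
model.fit(x_train, y_train, epochs=5, batch_size=8, verbose=0)

# At runtime, the device would score each incoming audio window and stop
# feeding noise into the host speaker once the wake-word class fires.
print(model.predict(x_train[:1]))
```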

So instead of having to say “Alexa” or “Ok Google” to talk to a commercial smart speaker — and thus being stuck parroting a big tech brand name in your own home, not to mention being saddled with a device that’s always vulnerable to vocal pranks (and worse: accidental wiretapping) — you get to control what the wake word is, thereby taking back a modicum of control over a natively privacy-hostile technology.

This means you could rename Alexa “Bezosallseeingeye,” or refer to your Google Home as “Carelesswhispers.” Whatever floats your boat.

Once Alias hears its custom wake command it will stop feeding noise into the host speaker — enabling the underlying smart assistant to hear and respond to commands as normal.

“We looked at how cordyceps fungus and viruses can appropriate and control insects to fulfill their own agendas and were inspired to create our own parasite for smart home systems,” explain Karmann and Knudsen in a write-up of the project here. “Therefore we started Project Alias to demonstrate how maker-culture can be used to redefine our relationship with smart home technologies, by delegating more power from the designers to the end users of the products.”

Alias offers a glimpse of a richly creative custom future for IoT, as the means of producing custom but still powerful connected technology products becomes more affordable and accessible.

And so also, perhaps, a partial answer to IoT’s privacy problem, for those who don’t want to abstain entirely. (Albeit, on the security front, more custom and controllable IoT does increase the hackable surface area — so that’s another element to bear in mind; more custom controls for greater privacy do not necessarily mesh with robust device security.)

If you’re hankering after your own Alexa-disrupting blob-topper, the pair have uploaded a build guide to Instructables and put the source code on GitHub. So fill yer boots.

Project Alias is of course not a solution to the underlying tracking problem of smart assistants — which harvest insights gleaned from voice commands to further flesh out interest profiles of users, including for ad targeting purposes.

That would require either proper privacy regulation or, er, a new kind of software virus that infiltrates the host system and prevents it from accessing user data. And — unlike this creative physical IoT add-on — that kind of tech would not be at all legal.

Microsoft continues to build government security credentials ahead of JEDI decision

While the DoD is in the process of reviewing the $10 billion JEDI cloud contract RFPs (assuming the work continues during the government shutdown), Microsoft continues to build up its federal government security bona fides, regardless.

Today the company announced it has achieved the highest level of federal government clearance for the Outlook mobile app, allowing U.S. Government Community Cloud (GCC) High and Department of Defense employees to use the mobile app. This is on top of the FedRAMP compliance the company achieved last year.

“To meet the high level of government security and compliance requirements, we updated the Outlook mobile architecture so that it establishes a direct connection between the Outlook mobile app and the compliant Exchange Online backend services using a native Microsoft sync technology and removes middle tier services,” the company wrote in a blog post announcing the update.

The update will allow these highly security-conscious employees to access some of the more recent updates to Outlook Mobile, such as the ability to add a comment when canceling an event.

This is in line with government security updates the company made last year. While none of these changes are specifically designed to help win the $10 billion JEDI cloud contract, they certainly help make a case for Microsoft from a technology standpoint.

As Microsoft corporate vice president for Azure Julia White stated in a blog post last year, which we covered, “Moving forward, we are simplifying our approach to regulatory compliance for federal agencies, so that our government customers can gain access to innovation more rapidly.” The Outlook Mobile release is clearly in line with that.

Today’s announcement comes after the Pentagon announced just last week that it has awarded Microsoft a separate large contract for $1.7 billion. This involves providing Microsoft Enterprise Services for the Department of Defense (DoD), Coast Guard and the intelligence community, according to a statement from DoD.

All of this comes ahead of a decision on the massive $10 billion, winner-take-all cloud contract. Final RFPs were submitted in October and the DoD is expected to make a decision in April. The process has not been without controversy, with Oracle and IBM submitting formal protests even before the RFP deadline — and more recently, Oracle filing a lawsuit alleging the contract terms violate federal procurement laws. Oracle has been particularly concerned that the contract was designed to favor Amazon, a point the DoD has repeatedly denied.

Schneider’s EVLink car charging stations were easily hackable, thanks to a hardcoded password

Schneider has fixed three vulnerabilities in one of its popular electric car charging stations, which security researchers said could have easily allowed an attacker to remotely take over the unit.

At its worst, an attacker could force a plugged-in vehicle to stop charging, rendering it useless in a “denial-of-service state,” an attack favored by some threat actors, as it’s an effective way of forcing something to stop working.

The bugs were fixed with a software update that rolled out on September 2, shortly after they were first disclosed, and limited details were revealed in a supporting document on December 20. A fuller picture of the vulnerabilities, found by New York-based security firm Positive Technologies, was released today — almost a month later.

Schneider’s EVLink charging stations come in all shapes and sizes — some for the garage wall and some at gas stations. It’s the charging stations at offices, hotels, shopping malls and parking garages that are vulnerable, said Positive.

At the center of Positive’s disclosure is Schneider’s EVLink Parking electric charging station, one of several charging products that Schneider sells, marketed primarily to apartment complexes, private parking areas, offices and municipalities. These charging stations are, like others, designed for all-electric and plug-in hybrid electric vehicles — including Teslas, which have their own proprietary connector.

Because the EVLink Parking station can be connected to Schneider’s cloud over a cellular or broadband internet connection, Positive said the web-based user interface on the charging unit can be remotely accessed by anyone, who can then easily send commands to the charging station — even while it’s in use.

“A hacker can stop the charging process, switch the device to the reservation mode, which would render it inaccessible to any customer until reservation mode is turned off, and even unlock the cable during the charging by manipulating the socket locking hatch, meaning attackers could walk away with the cable,” said Positive.

“For electric car drivers, this means not being able to use their vehicles since they cannot be charged,” it said. The company also said it’s possible to charge a car for free by exploiting these vulnerabilities.

Positive didn’t say what the since-removed password was. We asked for it — out of sheer curiosity more than anything — but the company isn’t releasing the password to prevent anyone exploiting the bug in unpatched systems.

The researchers, Vladimir Kononovich and Vyacheslav Moskvin, also found two other bugs that give an attacker full access to a device — a code injection flaw and a SQL injection vulnerability. Both were fixed in the same software update.
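
Positive hasn’t published the vulnerable code, but as a generic, hedged illustration of what a SQL injection bug looks like (the table, columns and inputs below are invented, not Schneider’s), here is the difference between building a query by string concatenation and using a parameterized query:

```python
# Generic illustration of SQL injection — not Schneider's actual code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (user TEXT, token TEXT)")
conn.execute("INSERT INTO sessions VALUES ('alice', 'secret-token')")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Vulnerable pattern: concatenating untrusted input into the SQL string lets
# the attacker rewrite the query; the OR clause matches every row.
rows = conn.execute(
    "SELECT token FROM sessions WHERE user = '" + user_input + "'"
).fetchall()
print("concatenated query returned:", rows)    # leaks the stored token

# Safer pattern: a parameterized query treats the input purely as data.
rows = conn.execute(
    "SELECT token FROM sessions WHERE user = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)   # returns nothing
```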

When reached, a Schneider spokesperson did not immediately have comment. If that changes, we’ll update.

Additional reporting: Kirsten Korosec.

Updated at 12:15pm ET: with additional details, including about the unreleased password.

Daily Crunch: Well Facebook, you did it again

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Facebook is the new crapware 

Well Facebook, you did it again. Fresh off its latest privacy scandal, the troubled social media giant has inked a deal to have its app pre-installed on an undisclosed number of Android phones, with the software made permanent. This means you won’t be able to delete Facebook from those phones. Thanks, Facebook.

2. The world’s first foldable phone is real 

Chinese company Royole has beaten Samsung to the market and has been showing off a foldable phone/tablet this week at CES. While it’s not the most fluid experience, the device definitely works at adapting to your needs.

3. CES revokes award from female-founded sex tech company

Outcries of a double standard are pouring out of CES after the Consumer Tech Association revoked an award from a company geared toward women’s sexual health.

4. Everything Google announced at CES 2019 

Google went all in on the Assistant this year at CES. The company boasted that the voice-enabled AI will make its way onto a billion devices by the end of the month — up from 400 million last year. But what’s most exciting is the expanded capabilities of Google’s Assistant. Soon you’ll be able to check into flights and translate conversations on the fly with a simple “Hey Google.”

5. Rebranding WeWork won’t work 

The company formerly known as WeWork has rebranded to the We Company, but its new strategy has the potential to plunge the company further into debt.

6. Despite promises to stop, US cell carriers are still selling your real-time phone location data

Last year a little-known company called LocationSmart came under fire after leaking location data from AT&T, Verizon, T-Mobile and Sprint users to shady customers. LocationSmart quickly buckled under public scrutiny and promised to stop selling user data, but few focused on another big player in the location tracking business: Zumigo.

7. The best and worst of CES 2019 

From monster displays to VR in cars, we’re breaking down the good, the bad and the ugly from CES 2019.

Despite promises to stop, US cell carriers are still selling your real-time phone location data

Last year, four of the largest U.S. cell carriers were caught selling and sending real-time location data of their customers to shady companies that sold it on to big spenders, who would use the data to track anyone “within seconds” for whatever reason they wanted.

At first, little-known company LocationSmart was obtaining (and leaking) real-time location data from AT&T, Verizon, T-Mobile and Sprint and selling access through another company, 3Cinteractive, to Securus, a prison technology company, which tracked phone owners without asking for their permission. This game of telephone with people’s private information was discovered, and the cell carriers, facing heavy rebuke from Sen. Ron Wyden, a privacy-minded lawmaker, buckled under the public pressure and said they’d stop selling and sharing customers’ locations.

And that would’ve been that — until it wasn’t.

Now, new reporting by Motherboard shows that while LocationSmart faced the brunt of the criticism, few focused on the other big player in the location-tracking business, Zumigo. A payment of $300 and a phone number was enough for a bounty hunter to track down the participating reporter, obtaining his location from Zumigo, which was still paying most of the carriers for access to their customers’ location data.

Worse, Zumigo sold that data on — as LocationSmart did with Securus — to other companies, like Microbilt, a Georgia-based credit reporting company, which in turn sells it to other firms that want it. In this case, that was a bail bond company, whose bounty hunter was paid by Motherboard to track down the reporter — with his permission.

Everyone seemed to drop the ball. Microbilt said the bounty hunter shouldn’t have used the location data to track the Motherboard reporter. Zumigo said it didn’t mind location data ending up in the hands of the bounty hunter, but still cut Microbilt’s access.

But nobody quite dropped the ball like the carriers, which said they would not share location data again.

T-Mobile, at the center of the latest location-selling revelations for passing the reporter’s location to the bounty hunter, said last year in the midst of the Securus scandal that it “reviewed” its real-time location data sharing program and found appropriate controls in place. To appease even the skeptical, T-Mobile chief executive John Legere tweeted at the time that he “personally evaluated the issue” and promised that the company “will not sell customer location data to shady middlemen.”

It’s hard to see how that isn’t, in hindsight, a downright lie.

Sounds like word hasn’t gotten to you, @ronwyden. I’ve personally evaluated this issue & have pledged that @tmobile will not sell customer location data to shady middlemen. Your consumer advocacy is admirable & we remain committed to consumer privacy. https://t.co/UPx3Xjhwog

John Legere (@JohnLegere) June 19, 2018

This time around, T-Mobile said it “does not have a direct relationship” with Microbilt but admitted one with Zumigo, which, given the story and the similarities to last year’s Securus scandal, could be considered one of many “shady middlemen” still obtaining location data from cell carriers.

Legere later said in a tweet late Wednesday that the company “is completely ending” its relationships with location aggregators in March, almost a year after the company was first implicated in the first location-sharing scandal.

It wasn’t just T-Mobile. Other carriers were also still selling and sharing their customers’ data.

AT&T said in last year’s letter that it would “protect customer data” and “shut down” Securus’ access to its real-time store of customer location data. Most saw that as a swift move to prevent third parties from accessing customer location data. Now, AT&T appears to have reneged on that year-old pledge, saying it will “only permit the sharing of location” in limited cases, including when required by law.

Sprint didn’t say what its relationship was with either Zumigo or Microbilt, but once again — like last year — cited its privacy policy as its catch-all to sell and share customer location data. Yet Sprint, like its fellow carriers AT&T and T-Mobile, which pledged to stop selling location data, clearly didn’t complete its “process of terminating its current contracts with data aggregators to whom we provide location data” as it promised in a letter a year ago.

Verizon, the parent company of TechCrunch, wasn’t explicitly cleared of sharing location data with third parties in Motherboard’s report — only that the bounty hunter refused to search for a Verizon number. (We’ve asked Verizon if it wants to clarify its position — so far, we’ve had nothing back.)

In a letter sent last year when the Securus scandal blew up, Verizon said it would “take steps to stop” sharing data with two firms — Zumigo and LocationSmart, an intermediary that passed on obtained location data to Securus. But that doesn’t mean it’s off the hook. It was still sharing location data with anyone who wanted to pay in the first place, putting its customers at risk from hackers, stalkers — or worse.

Wyden, who tweeted about the story, said carriers selling customer location data “is a nightmare for national security and the personal safety of anyone with a phone.” And yet there’s no way to opt out — shy of a legislative fix — given that two-thirds of the U.S. population aren’t going to switch to a carrier that doesn’t sell their location data.

It turns out, you really can’t trust your cell carrier. Who knew?

Millions of Android users tricked into downloading 85 adware apps from Google Play

Another day, another batch of bad apps in Google Play.

Researchers at security firm Trend Micro have discovered dozens of apps, including popular utilities and games, serving a ton of deceptively displayed ads — including full-screen ads, hidden ads and ads running in the background — to squeeze as much money as possible out of unsuspecting Android users.

In all, the researchers found 85 apps pushing adware, totaling at least 9 million affected users.

One app — a universal TV remote app for Android — had more than five million users alone, despite a rash of negative reviews and complaints that ads were “hidden in the background.” Other users said there were “so many ads, [they] can’t even use it.”

The researchers tested each app and found that most shared the same or similar code, and often the apps were similarly named. At every turn, tap or click, the app would display an ad, they found. In doing so, the app generates money for the app maker.

Some of the bad adware-ridden apps found by security researchers. (Image: Trend Micro)

Adware-fueled apps might not seem as bad as apps packed with malware or hidden functionality, such as those that pull malicious payloads from another server after the app is installed. But at scale, adware can amount to thousands of fraudulent ad dollars each week. Some ads also have a tendency to be malicious, containing hidden code that tries to trick users into installing malware on their phones or computers.

Some of the affected apps include: A/C Air Conditioner Remote, Police Chase Extreme City 3D Game, Easy Universal TV Remote, Garage Door Remote Control, Prado Parking City 3D Game and more. (You can find a full list of apps here.)

Google told TechCrunch that it had removed the apps, but a spokesperson did not comment further.

We tried reaching out to the universal TV remote app creator but the registered email on the since-removed Google Play store app points to a domain that no longer exists.

Despite Google’s best efforts in scanning apps before they’re accepted into Google Play, malicious apps are one of the biggest and most common threats to Android users. Google pulled more than 700,000 malicious apps from Google Play in the past year alone, and has tried to improve its back-end to prevent malicious apps from getting into the store in the first place.

Yet the search and mobile giant continues to battle rogue and malicious apps, pulling at least 13 malicious apps in a sweep in November alone.

Security researchers find over a dozen iPhone apps linked to Golduck malware

Security researchers say they’ve found more than a dozen iPhone apps covertly communicating with a server associated with Golduck, a historically Android-focused malware that infects popular classic game apps.

The malware has been known about for over a year; it was first discovered by Appthority infecting classic and retro games on Google Play, embedding backdoor code that allowed malicious payloads to be silently pushed to the device. At the time, more than 10 million users were affected by the malware, allowing hackers to run malicious commands at the highest privileges, like sending premium SMS messages from a victim’s phone to make money.

Now, the researchers say iPhone apps linked to the malware could also present a risk.

Wandera, an enterprise security firm, said it found 14 apps — all retro-style games — that were communicating with the same command and control server used by the Golduck malware.

“The [Golduck] domain was on a watchlist we established due to its use in distributing a specific strain of Android malware in the past,” said Michael Covington, Wandera’s vice-president of product. “When we started seeing communication between iOS devices and the known malware domain, we investigated further.”

The apps include: Commando Metal: Classic Contra; Super Pentron Adventure: Super Hard; Classic Tank vs Super Bomber; Super Adventure of Maritron; Roy Adventure Troll Game; Trap Dungeons: Super Adventure; Bounce Classic Legend; Block Game; Classic Bomber: Super Legend; Brain It On: Stickman Physics; Bomber Game: Classic Bomberman; Classic Brick – Retro Block; The Climber Brick; and Chicken Shoot Galaxy Invaders.

According to the researchers, what they’ve seen so far seems relatively benign — the command and control server simply pushes a list of icons into a pocket of ad space in the upper-right corner of the app. When the user opens the game, the server tells the app which icons and links it should serve to the user. They did, however, see the apps sending IP address data — and, in some cases, location data — back to the Golduck command and control server. TechCrunch verified their claims, running the apps on a clean iPhone through a proxy, allowing us to see where the data goes. Based on what we saw, each app tells the malicious Golduck server the app name, version, device type and IP address of the device — as well as how many ads were displayed on the phone.
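
TechCrunch doesn’t say which proxy it used, but as a hedged sketch of how this kind of check works, here is a small mitmproxy addon that flags any request a device makes to a watched domain — the domain below is a placeholder, not the real Golduck server:

```python
# Hypothetical mitmproxy addon for spotting app traffic to a suspect domain.
# Run with: mitmproxy -s watch_domain.py
# (The phone must be configured to use the proxy and trust its certificate.)
from mitmproxy import http

WATCHED_DOMAINS = {"suspect-domain.example.com"}  # placeholder, not the real server

class WatchDomain:
    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        if any(host == d or host.endswith("." + d) for d in WATCHED_DOMAINS):
            # Log the full URL and a snippet of any data the app is sending out.
            print(f"[suspect traffic] {flow.request.method} {flow.request.pretty_url}")
            if flow.request.text:
                print(f"  body: {flow.request.text[:200]}")

addons = [WatchDomain()]
```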

As of now, the researchers say that the apps are packed with ads — likely as a way to make a quick buck. But they expressed concern that the communication between the app and the known-to-be-malicious server could open up the app — and the device — to malicious commands down the line.

“The apps themselves are technically not compromised; while they do not contain any malicious code, the backdoor they open presents a risk for exposure that our customers do not want to take.

“A hacker could easily use the secondary advertisement space to display a link that redirects the user and dupes them into installing a provisioning profile or a new certificate that ultimately allows for a more malicious app to be installed,” said the researchers.

That could be said for any game or app, regardless of device maker or software. But the connection to a known malicious server isn’t a good look. Covington said that the company has “observed malicious content being shared from the server,” but that it wasn’t related to the games.

The implication is that if the server is sending malicious payloads to Android users, iPhone users could be next.

TechCrunch sent the list of apps to data insights firm Sensor Tower, which estimated that the 14 apps had been installed close to one million times since they were released — excluding repeated downloads or installs across different devices.

When we tried contacting the app makers, many of the App Store links pointed to dead links or to pages with boilerplate privacy policies but no contact information. The registrant on the Golduck domain appears to be fake, along with other domains associated with Golduck, which often have different names and email addresses.

Apple did not comment when reached prior to publication. The apps appear to still be listed on the App Store, but all now say they are “not currently available in the U.S. store.”

Apple’s app stores may have a better rap than Google’s, which every once in a while lets malicious apps slip through the net. In reality, neither store is perfect. Earlier this year, security researchers found a top-tier app in the Mac App Store that was collecting users’ browsing history without permission, and dozens of iPhone apps that were sending user location data to advertisers without explicitly asking first.

For the average user, malicious apps remain the largest and most common threat — even with locked-down device software and the extensive vetting of apps.

If there’s one lesson, now and always: don’t download what you don’t need, or can’t trust.
