cryptography

Google is making autofill on Chrome for mobile more secure


Google today announced a new autofill experience for Chrome on mobile that will use biometric authentication for credit card transactions, as well as an updated built-in password manager that will make signing in to a site a bit more straightforward.

Image Credits: Google

Chrome already uses the W3C WebAuthn standard for biometric authentication on Windows and Mac. With this update, the feature is now also coming to Android.

If you’ve ever bought something through the browser on your Android phone, you know that Chrome always asks you to enter the CVC code from your credit card to ensure that it’s really you — even if you have the credit card number stored on your phone. That was always a bit of a hassle, especially when your credit card wasn’t close to you.

Now, you can use your phone’s biometric authentication to buy those new sneakers with just your fingerprint — no CVC needed. Or you can opt out, too, as you’re not required to enroll in this new system.

As for the password manager, the update here is the new touch-to-fill feature that shows you your saved accounts for a given site through a standard Android dialog. That’s something you’re probably used to from your desktop-based password manager already, but it’s definitely a major new built-in convenience feature for Chrome — and the more people opt to use password managers, the safer the web will be. This new feature is coming to Chrome on Android in the next few weeks, but Google says this “is only the start.”

Image Credits: Google


Powered by WPeMatico

Decrypted: iOS 13.5 jailbreak, FBI slams Apple, VCs talk cybersecurity


It was a busy week in security.

Newly released documents obtained exclusively by TechCrunch show that U.S. immigration authorities used a controversial cell phone snooping technology known as a “stingray” hundreds of times in the past three years. Also, if you haven’t updated your Android phone in a while, now would be a good time to check. That’s because a brand-new security vulnerability was found — and patched. The bug, if exploited, could let a malicious app masquerade as a legitimate one, which could be used to steal a user’s passwords.

Here’s more from the week.


THE BIG PICTURE

Every iPhone now has a working jailbreak


Decrypted: No warrants for web data, UK grid cyberattack, CyberArk buys Idaptive


One vote.

That’s all it needed for a bipartisan Senate amendment to pass that would have stopped federal authorities from further accessing millions of Americans’ browsing records. But it didn’t. One Republican was in quarantine, another was AWOL. Two Democratic senators — including former presidential hopeful Bernie Sanders — were nowhere to be seen, and neither responded to a request for comment.

It was one of several amendments offered up in the effort to reform and reauthorize the Foreign Intelligence Surveillance Act, the basis of U.S. spying laws. The law, signed in 1978, put restrictions on whom intelligence agencies could target with their vast listening and collection stations. But after the Edward Snowden revelations in 2013, lawmakers champed at the bit to change the system to better protect Americans, who are largely shielded from such spying within U.S. borders.

One privacy-focused amendment, brought by Sens. Mike Lee and Patrick Leahy, did pass — it permits more independent oversight of the secretive and typically one-sided Washington, D.C. court that authorizes government surveillance programs, the Foreign Intelligence Surveillance Court. That amendment all but guarantees the bill will bounce back to the House for further scrutiny.

Here’s more from the week.


THE BIG PICTURE

Three years after WannaCry, U.S. still on North Korea’s tail

A feature-length profile in Wired magazine looks at the life of Marcus Hutchins, one of the heroes who helped stop the world’s biggest cyberattack, three years ago to the day.

The profile — a 14,000-word cover story — examines his part in halting the spread of the global WannaCry ransomware attack and how his early days led him into a criminal world that prompted him to plead guilty to felony hacking charges. Thanks in part to his efforts in saving the internet, he was sentenced to time served and walked free.


Apple and Google update joint coronavirus tracing tech to improve user privacy and developer flexibility


Apple and Google have provided a number of updates about the technical details of their joint contact tracing system, which they’re now exclusively referring to as an “exposure notification” technology, since the companies say this is a better way to describe what they’re offering. The system is just one part of a contact tracing system, they note, not the entire thing. Changes include modifications made to the API that the companies say provide stronger privacy protections for individual users, and changes to how the API works that they claim will enable health authorities building apps that make use of it to develop more effective software.

The additional measures being implemented to protect privacy include changing the cryptography mechanism for generating the keys used to trace potential contacts. They’re no longer specifically bound to a 24-hour period, and they’re now randomly generated instead of derived from a so-called “tracing key” that was permanently attached to a device. In theory, with the old system, an advanced enough attack with direct access to the device could potentially be used to figure out how individual rotating keys were generated from the tracing key, though that would be very, very difficult. Apple and Google clarified that it was included for the sake of efficiency originally, but they later realized they didn’t actually need this to ensure the system worked as intended, so they eliminated it altogether.

The new method makes it even more difficult for a would-be bad actor to determine how the keys are derived and then use that information to track specific individuals. Apple and Google’s goal is to ensure this system does not link contact tracing information to any individual’s identity (except for the individual’s own use), and this should help further ensure that’s the case.
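The change can be sketched in a few lines. This is a hedged illustration rather than the actual Exposure Notification spec: the key size, the HMAC-based derivation, and the function names here are assumptions for clarity; the real design uses its own derivation functions.

```python
import hashlib
import hmac
import os

def new_daily_exposure_key() -> bytes:
    # Under the updated design, each day's key is drawn fresh from the
    # OS random source rather than derived from a long-lived per-device
    # "tracing key", so no master secret exists to reverse-engineer.
    return os.urandom(16)

def rolling_identifier(daily_key: bytes, interval: int) -> bytes:
    # Hypothetical derivation of the short-lived identifier broadcast
    # over Bluetooth for a given time interval. Any one-way keyed
    # function gives the key property: identifiers from different
    # intervals cannot be linked without the daily key.
    mac = hmac.new(daily_key, interval.to_bytes(4, "little"), hashlib.sha256)
    return mac.digest()[:16]

key = new_daily_exposure_key()
first = rolling_identifier(key, 1)
second = rolling_identifier(key, 2)
assert first != second  # identifiers rotate every interval
```

Because each day's key is independent random data, compromising a device today reveals nothing about how past or future identifiers were generated.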

The companies will now also be encrypting any metadata associated with specific Bluetooth signals, including the strength of signal and other info. This metadata can theoretically be used in sophisticated reverse identification attempts, by comparing the metadata associated with a specific Bluetooth signal with known profiles of Bluetooth radio signal types as broken down by device and device generation. Taken alone, it’s not much of a risk in terms of exposure, but this additional step means it’s even harder to use that as one of a number of vectors for potential identification for malicious use.

It’s worth noting that Google and Apple say this is intended as a fixed-length service, and so it has a built-in way to disable the feature at a time to be determined by regional authorities, on a case-by-case basis.

Finally on the privacy front, any apps built using the API will now be provided exposure time in five-minute intervals, with a maximum total exposure time reported of 30 minutes. Rounding these to specific five-minute duration blocks and capping the overall limit across the board helps ensure this info, too, is harder to link to any specific individual when paired with other metadata.
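The rounding and capping described above amount to a one-line transformation. Whether the real API rounds to the nearest block or truncates is not stated here, so nearest-block rounding in this sketch is an assumption:

```python
def reported_exposure_minutes(raw_seconds: float) -> int:
    """Round a measured exposure to five-minute blocks and cap the
    reported total at 30 minutes, so durations are harder to link
    back to one specific encounter."""
    minutes = raw_seconds / 60
    rounded = 5 * round(minutes / 5)
    return min(rounded, 30)

print(reported_exposure_minutes(11 * 60))  # an 11-minute contact reports as 10
print(reported_exposure_minutes(47 * 60))  # a 47-minute contact caps at 30
```

Coarsening the value means many different true durations map to the same reported number, which is exactly what makes it harder to single out one individual.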

On the developer and health authority side, Apple and Google will now be providing signal strength information in the form of Bluetooth radio power output data, which will provide a more accurate measure of distance between two devices in the case of contact, particularly when used with existing received signal strength info from the corresponding device that the API already provides access to.

Individual developers can also set their own parameters in terms of how strong a signal is and what duration will trigger an exposure event. This is better for public health authorities because it allows them to be specific about what level of contact actually defines a potential contact, as it varies depending on geography in terms of the official guidance from health agencies. Similarly, developers can now determine how many days have passed since an individual contact event, which might alter their guidance to a user (i.e. if it’s already been 14 days, measures would be very different from if it’s been two).
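To make those developer-facing knobs concrete, here is a hedged sketch of how a health authority’s app might apply such thresholds. The parameter names and default values are illustrative assumptions, not the actual Exposure Notification API:

```python
from dataclasses import dataclass

@dataclass
class ExposureConfig:
    # Illustrative parameters a health authority might tune;
    # these names are hypothetical, not the real API's.
    max_attenuation_db: int = 60   # ignore weaker (more attenuated) signals
    min_duration_min: int = 10     # ignore brief passes
    max_days_since: int = 14       # ignore stale contacts

def is_exposure(attenuation_db: int, duration_min: int,
                days_since: int, cfg: ExposureConfig = ExposureConfig()) -> bool:
    # A contact counts as an exposure event only if it was close enough,
    # long enough, and recent enough under the authority's settings.
    return (attenuation_db <= cfg.max_attenuation_db
            and duration_min >= cfg.min_duration_min
            and days_since <= cfg.max_days_since)

print(is_exposure(55, 12, 3))  # close, sustained, recent: True
print(is_exposure(75, 12, 3))  # too attenuated (too far away): False
```

Keeping these thresholds in app-level configuration is what lets each country encode its own health agency’s definition of a meaningful contact.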

Apple and Google are also changing the encryption algorithm used to AES, from the HMAC system they were previously using. The reason for this switch is that the companies found that AES encryption, which can be accelerated locally using on-board hardware in many mobile devices, makes the API more energy efficient and lessens its performance impact on smartphones.

As we reported Thursday, Apple and Google also confirmed that they’re aiming to distribute the beta seed version of the OS updates that will support the system next week. On Apple’s side, the update will support any iOS hardware released over the course of the past four years running iOS 13. On the Android side, Google said it would cover around 2 billion devices globally.

Coronavirus tracing: Platforms versus governments

One key outstanding question is what will happen in the case of governments that choose to use centralized protocols for COVID-19 contact tracing apps, with proximity data uploaded to a central server — rather than opting for a decentralized approach, which Apple and Google are supporting with an API.

In Europe, the two major EU economies, France and Germany, are both developing contact tracing apps based on centralized protocols — the latter planning deep links to labs to support digital notification of COVID-19 test results. The U.K. is also building a tracing app that will reportedly centralize data with the local health authority.

This week Bloomberg reported that the French government is pressuring Apple to remove technical restrictions on Bluetooth access in iOS, with the digital minister, Cédric O, saying in an interview Monday: “We’re asking Apple to lift the technical hurdle to allow us to develop a sovereign European health solution that will be tied to our health system.”

Meanwhile, a German-led standardization push around COVID-19 contact tracing apps, called PEPP-PT — which has so far only given public backing to a centralized protocol, despite claiming it will support both approaches — said last week that it wants changes made to the Google-Apple API to accommodate centralized protocols.

Asked about this issue, an Apple spokesman told us the company is not commenting on the apps or plans of specific countries. But the spokesman pointed back to a position on Bluetooth it set out in an earlier statement with Google — in which the companies write that user privacy and security are “central” to their design.

Judging by the updates to Apple and Google’s technical specifications and API framework, as detailed above, the answer to whether the tech giants will bow to government pressure to support state centralization of proximity social graph data looks to be a strong “no.”

The latest tweaks look intended to reinforce individual privacy and further shrink the ability of outside entities to repurpose the system to track people and/or harvest a map of all their contacts.

The sharpening of Apple and Google’s nomenclature is also interesting in this regard — with the pair now talking about “exposure notification” rather than “contact tracing” as the preferred terminology for the digital intervention. This shift of emphasis suggests they’re keen to avoid any risk of their role being (mis)interpreted as supporting broader state surveillance of citizens’ social graphs, under the guise of a coronavirus response.

Backers of decentralized protocols for COVID-19 contact tracing — such as DP-3T, a key influence for the Apple-Google joint effort that’s being developed by a coalition of European academics — have warned consistently of the risk of surveillance creep if proximity data is pooled on a central server.

Apple and Google’s change of terminology doesn’t bode well for governments with ambitions to build what they’re counter-branding as “sovereign” fixes — aka data grabs that do involve centralizing exposure data. Although whether this means we’re headed for a big standoff between certain governments and Apple over iOS security restrictions — à la Apple vs the FBI — remains to be seen.

Earlier today, Apple and Google’s EU privacy chiefs also took part in a panel discussion organized by a group of European parliamentarians, which specifically considered the question of centralized versus decentralized models for contact tracing.

Asked about supporting centralized models for contact tracing, the tech giants offered a dodge, rather than a clear “no.”

“Our goal is to really provide an API to accelerate applications. We’re not obliging anyone to use it as a solution. It’s a component to help make it easier to build applications,” said Google’s Dave Burke, VP of Android engineering.

“When we build something we have to pick an architecture that works,” he went on. “And it has to work globally, for all countries around the world. And when we did the analysis and looked at different approaches we were very heavily inspired by the DP-3T group and their approach — and that’s what we have adopted as a solution. We think that gives the best privacy preserving aspects of the contacts tracing service. We think it’s also quite rich in epidemiological data that we think can be derived from it. And we also think it’s very flexible in what it could do. [The choice of approach is] really up to every member state — that’s not the part that we’re doing. We’re just operating system providers and we’re trying to provide a thin layer of an API that we think can help accelerate these apps but keep the phone in a secure, private mode of operation.”

“That’s really important for the expectations of users,” Burke added. “They expect the devices to keep their data private and safe. And then they expect their devices to also work well.”

DP-3T’s Michael Veale was also on the panel — busting what he described as some of the “myths” about decentralized contact tracing versus centralized approaches.

“The [decentralized] system is designed to provide data to epidemiologists to help them refine and improve the risk score — even daily,” he said. “This is totally possible. We can do this using advanced methods. People can even choose to provide additional data if they want to epidemiologists — which is not really required for improving the risk score but might help.”

“Some people think a decentralized model means you can’t have a health authority do that first call [to a person exposed to a risk of infection]. That’s not true. What we don’t do is we don’t tag phone numbers and identities like a centralized model can to the social network. Because that allows misuse,” he added. “All we allow is that at the end of the day the health authority receives a list separate from the network of whose phone number they can call.”

MEP Sophie in ’t Veld, who organized the online event, noted at the top of the discussion that they had also invited PEPP-PT to join the call, but said no one from the coalition had been able to attend the video conference.


Test and trace with Apple and Google


After the shutdown, the testing and tracing. “Trace, test and treat is the mantra … no lockdowns, no roadblocks and no restriction on movement” in South Korea. “To suppress and control the epidemic, countries must isolate, test, treat and trace,” say WHO.

But what does “tracing” look like exactly? In Singapore, they use a “TraceTogether” app, which uses Bluetooth to track nearby phones (without location tracking), keeps local logs of those contacts, and only uploads them to the Ministry of Health when the user chooses/consents, presumably after a diagnosis, so those contacts can be alerted. Singapore plans to open-source the app.
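The consent-gated local log this describes can be sketched in a few lines. This is a hedged illustration assuming a simple in-memory store; the class and method names are hypothetical, not TraceTogether’s actual code:

```python
import time

class ContactLog:
    """Sketch of a TraceTogether-style local contact log: observed
    anonymous IDs stay on the phone until the user consents to upload.
    Illustrative only."""

    def __init__(self, retention_days: int = 21):
        self.retention_days = retention_days
        self._entries = []  # (timestamp, anonymous_id) pairs

    def record(self, anonymous_id: str, now: float = None):
        ts = time.time() if now is None else now
        self._entries.append((ts, anonymous_id))

    def prune(self, now: float = None):
        # Old entries are deleted locally; no server is involved.
        cutoff = (time.time() if now is None else now) - self.retention_days * 86400
        self._entries = [(t, i) for t, i in self._entries if t >= cutoff]

    def export_with_consent(self, user_consented: bool):
        # Data leaves the device only on an explicit user decision,
        # e.g. after a positive diagnosis.
        if not user_consented:
            return None
        return list(self._entries)
```

The privacy property lives entirely in `export_with_consent`: until that call, the Ministry of Health holds nothing.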

In South Korea, the government texts people to let them know if they were in the vicinity of a diagnosed individual. The information conveyed can include the person’s age, gender, and detailed location history. Subsequently, even more details may be made available:

The level of detail provided by @Seoul_gov for each and every COVID-19 case in the city is astonishing:

Last name (which I’ve obscured)
Sex
Birth year
District of residence
Profession
Travel history
Contact with known cases
Hospital where they’re being treated pic.twitter.com/GsI0QQPcVH

— Victoria Kim (@vicjkim) March 24, 2020

In China, as you might expect, the surveillance is even more pervasive and draconian. Here, the pervasive apps Alipay and WeChat now include health codes – green, yellow, or red – set by the Chinese government, using opaque criteria. This health status is then used in hundreds of cities (and soon nationwide) to determine whether people are allowed to e.g. ride the subway, take a train, enter a building, or even exit a highway.

What about us, in the rich democratic world? Are we OK with the Chinese model? Of course not. The South Korean model? …Probably not. The Singaporean model? …Maybe. (I suspect it would fly in my homeland of Canada, for instance.) But the need to install a separate app, with TraceTogether or the directionally similar MIT project Safe Paths, is a problem. It works in a city-state like Singapore but will be much more problematic in a huge, politically divided nation like America. This will lead to inferior data, skewed by both noncompliance and selection bias.

More generally, at what point does the urgent need for better data collide with the need to protect individual privacy and avoid enabling the tools for an aspiring, or existing, police state? And let’s not kid ourselves; the pandemic increases, rather than diminishes, the authoritarian threat.

Maybe, like the UK’s NHS, creators of new pandemic data infrastructures will promise “Once the public health emergency situation has ended, data will either be destroyed or returned” — but not all organizations instill the required level of trust in their populace. This tension has provoked heated discussion around whether we should create new surveillance systems to help mitigate and control the pandemic.

This surprises me greatly. Wherever you may be on that spectrum, there is no sense whatsoever in creating a new surveillance system — seeing as how multiple options already exist. We don’t like to think about it, much, but the cold fact is that two groups of entities already collectively have essentially unfettered access to all our proximity (and location) data, as and when they choose to do so.

I refer of course to the major cell providers, and to Apple & Google. This was vividly illustrated by data company Tectonix in a viral visualization of the spread of Spring Break partygoers:

Want to see the true potential impact of ignoring social distancing? Through a partnership with @xmodesocial, we analyzed secondary locations of anonymized mobile devices that were active at a single Ft. Lauderdale beach during spring break. This is where they went across the US: pic.twitter.com/3A3ePn9Vin

— Tectonix GEO (@TectonixGEO) March 25, 2020

Needless to say, Apple and Google, purveyors of the OSes on all those phones, have essentially the same capability as and when they choose to exercise it. An open letter from “technologists, epidemiologists & medical professionals” calls on “Apple, Google, and other mobile operating system vendors” (the notion that any other vendors are remotely relevant is adorable) “to provide an opt-in, privacy preserving OS feature to support contact tracing.”

They’re right. Android and iOS could, and should, add and roll out privacy-preserving, interoperable, TraceTogether-like functionality at the OS level (or Google Play Services level, to split fine technical hairs). Granted, this means relying on corporate surveillance, which makes all of us feel uneasy. But at least it doesn’t mean creating a whole new surveillance infrastructure. Furthermore, Apple and Google, especially compared to cellular providers, have a strong institutional history and focus on protecting privacy and limiting the remit of their surveillance.

(Don’t believe me? Apple’s commitment to privacy has long been a competitive advantage. Google offers a thorough set of tools to let you control your data and privacy settings. I ask you: where is your cell service provider’s equivalent? Ah. Do you expect it to ever create one? I see. Would you also be interested in this fine, very lightly used Brooklyn Bridge I have on sale?)

Apple and Google are also much better suited to the task of preserving privacy by “anonymizing” data sets (I know, I know, but see below), or, better yet, preserving privacy via some form(s) of differential privacy and/or homomorphic encryption — or even some kind of zero-knowledge cryptography, he handwaved wildly. And, on a practical level, they’re more able than a third-party app developer to ensure a background service like that stays active.
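Of the techniques handwaved at above, differential privacy is the easiest to make concrete. A minimal sketch of the Laplace mechanism for releasing an aggregate count (say, exposure events per region) without betraying any one individual:

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace(0, 1/epsilon) noise, the standard
    epsilon-differentially-private mechanism for a sensitivity-1 query.
    The noise is sampled as the difference of two exponentials."""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

Any one person's presence changes the true count by at most one, and the noise makes those two neighboring worlds statistically hard to distinguish, while averages over many releases remain accurate.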

Obviously this should all be well and firmly regulated. But at the same time, we should remain cognizant of the fact that not every nation believes in such regulation. Building privacy deep into a contact-tracing system, to the maximum extent consonant with its efficacy, is especially important when we consider its potential usage in authoritarian nations who might demand the raw data. “Anonymized” location datasets admittedly tend to be something of an oxymoron, but authoritarians may still be technically stymied by the difficulty of deanonymization; and if individual privacy can be preserved even more securely than that via some elegant encryption scheme, so much the better.

Compared to the other alternatives — government surveillance; the phone companies; or some new app, with all the concomitant friction and barriers to usage — Apple and Google are by some distance the least objectionable option. What’s more, in the face of this global pandemic they could roll out their part of the test-and-trace solution to three billion users relatively quickly. If we need a pervasive pandemic surveillance system, then let’s use one which (though we don’t like to talk about it) already exists, in the least dangerous, most privacy-preserving way.


Ring’s new security ‘control center’ isn’t nearly enough


On the same day that a Mississippi family is suing Amazon-owned smart camera maker Ring for not doing enough to prevent hackers from spying on their kids, the company has rolled out its previously announced “control center,” which it hopes will make you forget about its verifiably “awful” security practices.

In a blog post out Thursday, Ring said the new “control center” “empowers” customers to manage their security and privacy settings.

Ring users can check to see if they’ve enabled two-factor authentication, add and remove users from the account, see which third-party services can access their Ring cameras, and opt out of allowing police to access their video recordings without the user’s consent.

But dig deeper and Ring’s latest changes still do practically nothing to change some of its most basic, yet highly criticized security practices.

Questions were raised over these practices months ago after hackers were caught breaking into Ring cameras and remotely watching and speaking to small children. The hackers were using previously compromised email addresses and passwords — a technique known as credential stuffing — to break into the accounts. Some of those credentials, many of which were simple and easy to guess, were later published on the dark web.

Yet, Ring still has not done anything to mitigate this most basic security problem.

TechCrunch ran several passwords through Ring’s sign-up page and found we could enter any easy-to-guess password, like “12345678” and “password” — which have consistently ranked as some of the most common passwords for several years running.
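The kind of basic check the article finds missing amounts to a few lines at sign-up. A hedged sketch, with a tiny stand-in blocklist (real services check candidates against corpora of hundreds of millions of breached passwords):

```python
# Tiny stand-in for a breached-password blocklist; purely illustrative.
COMMON_PASSWORDS = {"12345678", "password", "qwerty123", "123456789"}

def password_acceptable(candidate: str, min_len: int = 8) -> bool:
    # Reject anything too short or on the known-compromised list;
    # this is the minimum bar credential-stuffing defenses start from.
    return len(candidate) >= min_len and candidate.lower() not in COMMON_PASSWORDS

print(password_acceptable("password"))   # False: on the blocklist
print(password_acceptable("12345678"))   # False: on the blocklist
print(password_acceptable("correct horse battery staple"))  # True
```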

To combat the problem, Ring said at the time users should enable two-factor authentication, a security feature that adds an additional check to prevent account breaches like password spraying, where hackers use a list of common passwords in an effort to brute force their way into accounts.

But Ring still uses a weak form of two-factor authentication, sending you a code by text message. Text messages are not secure and can be compromised through interception and SIM swapping attacks. Even NIST, the government’s technology standards body, has deprecated support for text message-based two-factor. Experts say although text-based two-factor is better than not using it at all, it’s far less secure than app-based two-factor, where codes are delivered over an encrypted connection to an app on your phone.
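The app-based codes the experts recommend are typically TOTP (RFC 6238): both the server and the app derive a short-lived code from a shared secret, so nothing travels over the phone network to be intercepted or SIM-swapped. A compact sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: int = None, step: int = 30,
         digits: int = 6) -> str:
    """RFC 6238 time-based one-time password: HMAC-SHA1 over a
    30-second time counter, dynamically truncated to a short code."""
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: SHA-1, ASCII secret "12345678901234567890"
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because the secret never leaves the device after enrollment, there is no text message for an attacker to redirect.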

Ring said it’ll make its two-factor authentication feature mandatory later this year, but has yet to say if it will ever support app-based two-factor authentication in the future.

The smart camera maker has also faced criticism for its cozy relationship with law enforcement, which has lawmakers concerned and demanding answers.

Ring allows police access to users’ videos without a subpoena or a warrant. (Unlike its parent company Amazon, Ring still does not publish the number of times police demand access to customer videos, with or without a legal request.)

Ring now says its control center will allow users to decide if police can access their videos or not.

But don’t be fooled by Ring’s promise that police “cannot see your video recordings unless you explicitly choose to share them by responding to a specific video request.” Police can still get a search warrant or a court order to obtain your videos, which isn’t particularly difficult if police can show there’s reasonable grounds that it may contain evidence — such as video footage — of a crime.

There’s nothing stopping Ring, or any other smart home maker, from offering a zero-knowledge approach to customer data, where only the user has the encryption keys to access their data. Ring cutting itself (and everyone else) out of the loop would be the only meaningful thing it could do if it truly cares about its users’ security and privacy. The company would have to decide if the trade-off is worth it — true privacy for its users versus losing out on access to user data, which would effectively kill its ongoing cooperation with police departments.

Ring says that security and privacy has “always been our top priority.” But if it’s not willing to work on the basics, its words are little more than empty promises.


‘Magic: The Gathering’ game maker exposed 452,000 players’ account data


The maker of Magic: The Gathering has confirmed that a security lapse exposed the data on hundreds of thousands of game players.

The game’s developer, the Washington-based Wizards of the Coast, left a database backup file in a public Amazon Web Services storage bucket. The database file contained user account information for the game’s online arena. But there was no password on the storage bucket, allowing anyone to access the files inside.

The bucket is not believed to have been exposed for long — since around early September — but it was long enough for U.K. cybersecurity firm Fidus Information Security to find the database.

A review of the database file showed it contained information on 452,634 players, including about 470 email addresses associated with Wizards’ staff. The database included player names and usernames, email addresses, and the date and time of each account’s creation. The database also had user passwords, which were hashed and salted, making them difficult but not impossible to unscramble.
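For context, “hashed and salted” typically looks like the following sketch (PBKDF2 here; the scheme Wizards actually used is not disclosed):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, rounds: int = 200_000):
    # A fresh random salt per user means identical passwords hash
    # differently, and the high round count makes bulk cracking of a
    # leaked database expensive.
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    rounds: int = 200_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```

An attacker with the leaked file must grind through the slow derivation for every guess against every salt, which is why salted, iterated hashes are “difficult but not impossible” to unscramble.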

None of the data was encrypted. The accounts date back to at least 2012, according to our review of the data, with the most recent entries dating to mid-2018.

A formatted version of the database backup file, redacted, containing 452,000 user records. (Image: TechCrunch)

Fidus reached out to Wizards of the Coast but did not hear back. It was only after TechCrunch reached out that the game maker pulled the storage bucket offline.

Bruce Dugan, a spokesperson for the game developer, told TechCrunch in a statement: “We learned that a database file from a decommissioned website had inadvertently been made accessible outside the company.”

“We removed the database file from our server and commenced an investigation to determine the scope of the incident,” he said. “We believe that this was an isolated incident and we have no reason to believe that any malicious use has been made of the data,” but the spokesperson did not provide any evidence for this claim.

“However, in an abundance of caution, we are notifying players whose information was contained in the database and requiring them to reset their passwords on our current system,” he said.

Harriet Lester, Fidus’ director of research and development, said it was “surprising in this day and age that misconfigurations and lack of basic security hygiene still exist on this scale, especially when referring to such large companies with a userbase of over 450,000 accounts.”

“Our research team work continuously, looking for misconfigurations such as this to alert companies as soon as possible to avoid the data falling into the wrong hands. It’s our small way of helping make the internet a safer place,” she told TechCrunch.

The game maker said it informed the U.K. data protection authorities about the exposure, in line with breach notification rules under Europe’s GDPR regulations. The U.K.’s Information Commissioner’s Office did not immediately return an email to confirm the disclosure.

Companies can be fined up to 4% of their annual turnover for GDPR violations.


Hackers to stress-test Facebook Portal at hacking contest


Hackers will soon be able to stress-test the Facebook Portal at the annual Pwn2Own hacking contest, following the introduction of the social media giant’s debut hardware device last year.

Pwn2Own is one of the largest hacking contests in the world, where security researchers gather to find and demonstrate exploits for vulnerabilities in a range of consumer electronics and technologies, from appliances to automobiles.

It’s not unusual for companies to let hackers put their products through their paces. Tesla earlier this year entered its new Model 3 sedan into the contest. A pair of researchers later scooped up $375,000, along with the car they hacked, for finding a severe memory randomization bug in the web browser of the car’s infotainment system.

Hackers able to remotely inject and run code on the Facebook Portal can receive up to $60,000, while a non-invasive physical attack or a privilege escalation bug can net $40,000.

The Portal’s introduction is part of a push by Trend Micro’s Zero Day Initiative, which runs the contest, to expand the range of home automation devices available to researchers in attendance. Contest organizers said researchers will also get a chance to try to hack an Amazon Echo Show 5, a Google Nest Hub Max, an Amazon Cloud Cam and a Nest Cam IQ Indoor.

Facebook said it also would allow hackers to find flaws in the Oculus Quest virtual reality kit.

Pwn2Own Tokyo, set to be held on November 6-7, is expected to dish out more than $750,000 in cash and prizes.

Yubico launches its dual USB-C and Lightning two-factor security key

Almost two months after it was first announced, Yubico has launched the YubiKey 5Ci, a security key with both Lightning and USB-C connectors for use with iPhones, Macs and other USB-C compatible devices.

Yubico’s newest YubiKey is the latest iteration of its security key built to support a newer range of devices, including Apple’s iPhone, iPad and MacBooks, in a single device. Announced in June, the company said the security keys would cater to cross-platform users — particularly Apple device owners.

These security keys are small enough to sit on a keyring. When you want to log in to an online account, you plug the key into your device and it authenticates you. Gmail, Twitter and Facebook all support these plug-in devices as a second factor of authentication after your username and password, a far stronger mechanism than a one-time code sent to your phone.
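At their core, these keys perform a challenge-response: the site sends a fresh random challenge, and the key proves possession of a secret by answering it. Real keys use per-site asymmetric key pairs under the FIDO U2F/WebAuthn standards; the sketch below substitutes an HMAC over a shared secret purely to stay dependency-free, so treat it as a simplified model of the flow rather than the actual protocol. All function names here are invented for the illustration.

```python
import hashlib
import hmac
import secrets

def register_key() -> bytes:
    """Enrollment: the key and the server establish a credential.
    (Real keys generate an asymmetric key pair per site instead.)"""
    return secrets.token_bytes(32)

def key_sign(credential: bytes, challenge: bytes) -> bytes:
    """What the hardware key does when you plug it in and tap it:
    prove possession of the credential by answering the challenge."""
    return hmac.new(credential, challenge, hashlib.sha256).digest()

def server_verify(credential: bytes, challenge: bytes, response: bytes) -> bool:
    """The server checks the response against its stored credential,
    using a constant-time comparison."""
    expected = hmac.new(credential, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

cred = register_key()
challenge = secrets.token_bytes(32)   # fresh per login attempt
response = key_sign(cred, challenge)  # happens on the hardware key
assert server_verify(cred, challenge, response)
```

Because the challenge is fresh on every login, a captured response is useless for replay, and a stolen password alone can never produce a valid answer.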

Security keys offer almost unbeatable security and can protect against a variety of threats, including nation-state attackers.

Jerrod Chong, Yubico’s chief solutions officer, said the new key would fill a “critical gap in the mobile authentication ecosystem,” particularly given how users are increasingly spending their time across a multitude of mobile devices.

The new key works with a range of apps, including password managers like 1Password and LastPass, and web browsers like Brave, which support security key authentication.

Google opens its Android security-key tech to iPhone and iPad users

Google will now allow iPhone and iPad owners to use their Android security key to verify sign-ins, the company said Wednesday.

Last month, the search and mobile giant said it had developed a new Bluetooth-based protocol that allows devices running Android 7.0 or later to act as a security key for two-factor authentication. Since then, Google said, 100,000 users have begun using their Android phones as a security key.

Since its debut, the technology had been limited to Chrome sign-ins. Now Google says Apple device owners can get the same protections without having to plug anything in.

Signing in to a Google account on an iPad using an Android 7.0 device (Image: Google)

Security keys are an important security step for users who are particularly at risk of advanced attacks. They’re designed to thwart even the smartest and most resourceful attackers, like nation-state hackers. Instead of a security key that you keep on your key ring, newer Android devices have the technology built in. When you log in to your account, you are prompted to authenticate with your key. Even if someone steals your password, they can’t log in without your authenticating device. Phishing pages won’t work either, because each security key response is cryptographically bound to the legitimate site’s domain and can’t be replayed on a look-alike site.
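That phishing resistance comes from origin binding: the browser mixes the site’s origin into the data the key signs, so a response generated on a look-alike domain fails verification at the real site. The sketch below is a simplified model, with HMAC standing in for the key’s real asymmetric signature (WebAuthn actually carries the origin in a signed client-data structure), and the domain names are made up.

```python
import hashlib
import hmac
import secrets

def sign_assertion(credential: bytes, challenge: bytes, origin: str) -> bytes:
    # The browser, not the page, supplies the origin, so a response
    # produced on a phishing page is bound to the phishing domain.
    payload = origin.encode() + b"|" + challenge
    return hmac.new(credential, payload, hashlib.sha256).digest()

def verify_assertion(credential: bytes, challenge: bytes,
                     expected_origin: str, response: bytes) -> bool:
    # The real site only accepts responses bound to its own origin.
    payload = expected_origin.encode() + b"|" + challenge
    expected = hmac.new(credential, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

cred = secrets.token_bytes(32)
chal = secrets.token_bytes(16)
good = sign_assertion(cred, chal, "https://accounts.google.com")
phished = sign_assertion(cred, chal, "https://accounts.goog1e.example")
assert verify_assertion(cred, chal, "https://accounts.google.com", good)
assert not verify_assertion(cred, chal, "https://accounts.google.com", phished)
```

This is why a security key holds up where a one-time code fails: a user can be tricked into typing a code into a fake page, but the key’s response simply doesn’t verify anywhere except the genuine origin.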

For the most part, security keys are a last line of defense. Google admitted last month that its standalone Titan security keys were vulnerable to a Bluetooth pairing bug that could put users at risk of account hijacking. The company offered a free replacement for any affected device.

The Android security key technology also complies with FIDO2, a secure and flexible standard that allows devices running different operating systems to communicate with each other for authentication.

For the Android security key to work, iPhone and iPad users need the Google Smart Lock app installed. For now, Google said the Android security key will be limited to sign-ins to Google accounts only.
