
Not to be overshadowed by the Apple Watch, AliveCor announces a new 6-lead ECG reader


Apple’s announcement last week of a Watch with an FDA-approved ECG reader to track heart health looked to be the undoing of original ECG reader company AliveCor. But, to prove it still has a hearty pulse, AliveCor tells TechCrunch it is coming out with a “never-before-seen” 6-lead electrocardiogram (ECG), pending FDA approval.

In a care clinic, a patient typically has 12 leads, or electrode stickers placed across the chest, to pick up data from the heart. Consumer ECG readers, by contrast, typically have one or two leads, and the Apple Watch places a single-lead system on the wrist. A 6-lead ECG reader is, in theory, more accurate because more sensors pick up more information, which could be critical in saving lives.

AliveCor’s readers and the Apple Watch currently focus on detecting AFib, an irregular heartbeat. AliveCor announced earlier this month that it had received FDA approval to use its ECG readers to detect a rare but dangerous blood condition called hyperkalemia, or elevated potassium.

With a 6-lead ECG reader, the AliveCor device could also pick up about 100 different diseases, according to CEO Vic Gundotra, who rattled off a list of long-worded maladies I can’t even begin to pronounce but that he hopes his reader will soon be able to detect.

One important detection, however, would be ST elevation, one of the key indicators of the onset of a heart attack, which could get a person on their way to the hospital before they start displaying other physical symptoms.

Of course, Apple — which already holds 17 percent of the wearables market — could easily decide it, too, needs to add a 6-lead ECG reader to the Watch and beat AliveCor to market yet again. But Gundotra shrugs at that suggestion.

“They could but we have some pretty good patents in the space,” he told TechCrunch, adding “Apple has done me a great service, actually. We’re a small company but you are talking to me, calling about this [because of their announcement].”

No formal name has been announced yet for the 6-lead product, but AliveCor will be working with the FDA on a regulatory pathway for it and hopes to bring it to consumers over the counter by 2019.


With a $10 million round, Nigeria’s Paga plans global expansion

Jake Bright, Contributor
Jake Bright is a writer and author in New York City. He is co-author of The Next Africa.

Nigerian digital payments startup Paga is gearing up for an international expansion with $10 million in funding led by the Global Innovation Fund.

The company is planning to release its payments product in Ethiopia, Mexico, and the Philippines—CEO Tayo Oviosu told TechCrunch at Disrupt San Francisco.

Paga looks to go head to head with regional and global payment players, such as PayPal, Alipay, and Safaricom’s M-Pesa, according to Oviosu.

“We are not only in a position to compete with them, we’re going beyond them,” he said of Kenya’s M-Pesa mobile money product. “Our goal is to build a global payment ecosystem across many emerging markets.”

Founded in 2012, Paga has created a multi-channel network and platform to transfer money, pay bills, and buy things digitally, one that already serves 9 million customers in Nigeria—including 6,000 businesses. All of them can drop into one of Paga’s 17,167 agent locations or transfer funds from one of Paga’s mobile apps.

Paga’s products work on iOS, Android, and basic phones via USSD star-and-hashtag short codes. The company has remittance partnerships with the likes of Western Union and Moneytrans and allows for third-party integration with its app.

Paga has also built out considerable scale in its home market of Nigeria—which holds the dual distinction of being Africa’s most populous nation and its largest economy.

Since inception, the startup has processed 57 million transactions worth $3.6 billion, according to Oviosu.

That’s no small feat given that the country straddles the challenges and opportunities of growing digital payments. Nigeria’s mobile and internet penetration only recently broke 50 percent, and roughly 40 percent of the country’s 196 million people remain unbanked.

To bring more of Nigeria’s masses onto digital commerce, Paga recently launched a new money-transfer app that further simplifies the P2P payment process from mobile devices.

For nearly a decade, Kenya’s M-Pesa—which has 20 million active users and operates abroad—has dominated discussions of mobile money in Africa.

Paga and a growing field of operators are diversifying the continent’s payment playing field.

Fintech company Cellulant raised $47 million in 2018 on the strength of a business that processes $350 million in payment transactions across 33 African countries.

In Nigeria, payment infrastructure company Interswitch has expanded across borders and is pursuing an IPO. And Nigerian payment gateway startups Paystack and Flutterwave have digitized volumes of B2B transactions while gaining global investment.

So why does Paga—a Nigerian payments company—believe it can expand its digital payments business abroad?

“Why not us?” said CEO Oviosu. “People sit in California and listen to Spotify, which was developed in Sweden. And Uber started somewhere before going to different countries and figuring out local markets,” he added.

“The team behind this business has worked globally for some of the top tech names. This platform can stand shoulder to shoulder with any payments company built somewhere else,” he said.

Oviosu underscores that Paga has positioned itself as a partner, not a rival, to traditional banks. “Our ecosystem is not built to compete with you, it’s actually complementary to you,” he said of the company’s pitch to big banks—a positioning that has enabled Paga to partner with seven banks in Nigeria.

Paga also sees potential to adapt its model to other regulatory and consumer environments. “We’ve built an infrastructure that rides across all mobile networks,” said Oviosu. “We’re not trying to be a bank. Paga wants to work with the banks and financial institutions to enable a billion people to access and use money,” he said.

As part of the $10 million round (which brings Paga’s total funding up to $35 million), the Global Innovation Fund will take a board seat. Other round participants include Goodwell, Adlevo Capital, Omidyar Network, and Unreasonable Capital.

Paga will use the Series B2 to grow its core development team of 25 engineers across countries and continents. It will also continue its due diligence on global expansion—though no hard dates have been announced.

On revenues, Paga makes money on merchant payments, bank-to-bank transfers, and selling airtime and data. “As we roll out other services, we will build a model where we will make money on savings and lending,” said the company’s CEO.

As for profitability, Paga does not release financials, but reached profitability in 2018, according to Oviosu—something that was confirmed in the due diligence process with round investors.

On the possibility of beating Interswitch (or another venerable startup) to become Africa’s first big tech IPO, Oviosu plays that down. “For the next 3-5 years I see us staying private,” he said.


XYZPrinting announces the da Vinci Color Mini


XYZPrinting may have finally cracked the color 3D printing code. Its latest machine, the $1,599 da Vinci Color Mini, is a full-color printer that uses three CMY ink cartridges to stain the filament as it is extruded, allowing for up to 15 million color combinations.

The printer is currently available for pre-order on Indiegogo for $999.

The printer can build objects up to 5.1″ x 5.1″ x 5.1″ in size and can print PLA or PETG. The CMY ink cartridges stain the 3D Color-inkjet PLA as it comes out of the extruder, creating truly colorful objects.
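
The arithmetic behind that color count is simple combinatorics. Here is a rough sketch; the per-channel level counts below are assumptions for illustration, not published XYZprinting specs:

```python
# Back-of-the-envelope color math. The per-channel level counts are
# illustrative assumptions, not XYZprinting specifications.
def color_combinations(levels_per_channel: int, channels: int = 3) -> int:
    """Distinct mixes if each ink is applied at one of N discrete levels."""
    return levels_per_channel ** channels

print(color_combinations(256))  # full 8-bit mixing: 16,777,216
print(color_combinations(246))  # ~246 usable levels per channel: ~14.9 million,
                                # in the ballpark of the quoted "15 million"
```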

“Desktop full-color 3D printing is here. Now, consumers can purchase an easy-to-operate, affordable, compact full-color 3D printer for $30,000 less than market rate. This is revolutionary because we are giving the public access to technology that was once only available to industry professionals,” said Simon Shen, CEO of XYZprinting.

The new system is aimed at educational and home markets and, at less than $1,000 on pre-order, it hits a unique and important sweet spot in terms of price. While the prints aren’t perfect, being able to print in full color for the price of a nicer single-color 3D printer is pretty impressive.


Miles gives you reward miles for almost everything


Reward miles are nice if you fly a lot but what if you bike or take Lyfts or just like to wander around town? A new app called Miles aims to give you rewards for all of those things, bringing the concept of rewards out of the air and onto the ground.

Miles, co-founded by Jigar Shah, Paresh Jain and Parin Shah, is a San Jose-based company that looked at the problem of reward miles outside of airlines as well as the problems associated with city planning and traffic data generation. The app, which is now in the iOS App Store, can see when you walk, ride a bike, take the bus, drive yourself, or even hop in a Lyft or an Uber. It then rewards you on a sliding scale depending on how eco-friendly your trip is. Biking, for example, is worth more than driving or even taking the bus.
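
Miles hasn’t published its exact formula, so the numbers below are purely hypothetical; a minimal sketch of that sliding scale might look like this, with greener modes earning more reward miles per mile traveled:

```python
# Hypothetical sliding-scale multipliers; Miles has not published its real values.
MODE_MULTIPLIER = {
    "walking": 3.0,
    "cycling": 2.5,
    "bus": 2.0,
    "rideshare": 1.5,   # Lyft or Uber
    "driving": 1.0,
}

def reward_miles(distance_miles: float, mode: str) -> float:
    """Reward miles for a trip, scaled by how eco-friendly the mode is."""
    return distance_miles * MODE_MULTIPLIER.get(mode, 1.0)

print(reward_miles(4.0, "cycling"))  # 10.0 reward miles
print(reward_miles(4.0, "driving"))  # 4.0 reward miles
```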

“Mobility today is a universal behavior that goes largely unrewarded,” said Jigar Shah. “To date, travel rewards have been siloed and limited to one form of travel – with consumers facing exclusions when it comes to earning and redeeming rewards. Miles solves for this gap in the market by allowing anyone to earn rewards – simply by traveling and commuting how they do every day.”

What can you get with your miles? Just for signing up you get 2,000 miles, which is enough for a $5 Starbucks, Target, or Whole Foods gift card, among others. There is also a “nearby” section that brings up deals from merchants in your area, but right now most of the deals are online. More miles get you better deals.

“In contrast to rewards programs in the market today, Miles delivers value for every mile traveled, across every mode of travel, anywhere in the world. Whether by car (as a driver, passenger or rideshare), plane, train, subway, bus, boat, bicycle, or on foot, the Miles app effortlessly awards users’ travel – regardless of where their journey takes them. Miles can be saved or redeemed at any time – with the value increasing every month as more merchants accept them as a form of payment,” said Shah.

Because the app tracks your movement across multiple types of transport, the Miles team foresees connecting with city governments to supply traffic and usage data for various forms of transport. Further, because miles can be redeemed locally, they could also increase foot traffic.

The company raised $3 million from Porsche Digital, Scrum Ventures, and others. Former TechCruncher Keith Teare also worked with the team on the raise.

Interestingly, the platform can also work to create predictive recommendations based on your position and past likes and dislikes.

“By leveraging the Miles’ predictive AI platform, business and brands can deliver value to customers by offering to meet their near future needs as they travel, such as when someone needs a meal, a fill-up at the gas station, or a ride,” wrote the team. “Annoying marketing can become true customer service by enabling hyper-targeted rewards related to immediate need. This not only leads to increased customer loyalty and repeat visits, but also increased sales.”

“We saw an opportunity to deliver more value to people as transportation continued to evolve,” said Shah.

Multiple city governments are looking to implement the technology locally, and the Contra Costa Transportation Authority will “offer rewards as an incentive to promote alternative and sustainable mode of transportation through the Miles platform.” Seattle is next, and maybe someday soon you’ll be earning miles for walking and driving in your hometown. At least it will get us out of the house.


InVision mobile app updates include studio features and desktop to mobile mirroring


InVision, the software-as-a-service challenger to Adobe’s design dominance, has just released a new version of its mobile app for iOS and is beta-testing new features for Android users as it tries to bring additional functionality to designers on the go.

The new app tools feature “studio mirroring” for reviewing new designs directly on mobile devices, so designers can see design changes made on the desktop displayed on mobile in real time.

The mirroring feature works by scanning a QR code with a mobile device, which lets users view design changes and test user experiences immediately.

The company is also bringing its Freehand support — which allows for collaborative commenting on design prototypes — to tablets so teams can comment on the fly, the company said.

The tools will give InVision another arrow in its quiver as it tries to take on other design platforms (notably the 800-pound gorilla known as Adobe) and are a useful addition to a service that’s trying to woo the notoriously fickle design community with an entire toolkit.

As we wrote in May when the company launched its app store:

While collaboration is the bread and butter of InVision’s business, and the only revenue stream for the company, CEO and founder Clark Valberg feels that it isn’t enough to be complementary to the current design tool ecosystem. Which is why InVision launched Studio in late 2017, hoping to take on Adobe and Sketch head-on with its own design tool.

Studio differentiates itself by focusing on the designer’s real-life workflow, which often involves mocking up designs in one app, pulling assets from another, working on animations and transitions in another, and then stitching the whole thing together to share for collaboration across InVision Cloud. Studio aims to bring all those various services into a single product, and a critical piece of that mission is building out an app store and asset store with the services too sticky for InVision to rebuild from scratch, such as Slack or Atlassian.


Original Stitch’s new Bodygram will measure your body


After years of teasing, Original Stitch has officially launched their Bodygram service and will be rolling it out this summer. The system can scan your body based on front and side photos and will create custom shirts with your precise measurements.

“Bodygram gives you full body measurements as accurate as taken by professional tailors from just two photos on your phone. Simply take a front photo and a side photo and upload to our cloud and you will receive a push notification within minutes when your Bodygram sizing report is ready,” said CEO Jin Koh. “In the sizing report you will find your full body measurements including neck, sleeve, shoulder, chest, waist, hip, etc. Bodygram is capable of producing sizing result within 99 percent accuracy compared to professional human tailors.”
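
Original Stitch hasn’t published a public API, so the endpoint and field names below are entirely hypothetical; the sketch only mirrors the flow Koh describes, i.e. upload a front and a side photo, then pick up the sizing report once it’s ready:

```python
# Hypothetical client flow for the described Bodygram process. The URL,
# endpoints and response fields are invented for illustration only.
import time
import requests

API = "https://api.example.com/bodygram"  # placeholder, not a real endpoint

def get_sizing_report(front_photo_path: str, side_photo_path: str) -> dict:
    with open(front_photo_path, "rb") as front, open(side_photo_path, "rb") as side:
        job = requests.post(f"{API}/scans", files={"front": front, "side": side}).json()
    # In the real app the result arrives as a push notification within minutes;
    # here we simply poll the hypothetical job endpoint instead.
    while True:
        report = requests.get(f"{API}/scans/{job['id']}").json()
        if report["status"] == "ready":
            return report["measurements"]  # neck, sleeve, shoulder, chest, waist, hip, ...
        time.sleep(10)
```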

The technology is a clever solution to the biggest problem in custom clothing: fit. While it’s great to find a service that will tailor your clothing based on your measurements, often these measurements are slightly off and can affect the cut of the shirt or pants. Right now, Koh said, his team offers free returns if the custom shirts don’t fit.

Further, the technology is brand new and avoids many of the pitfalls of the original body-scanning tech. For example, Bodygram doesn’t require you to get into a Spandex onesie like most systems do and it can capture 40 measurements with only two full-body photos.

“Bodygram is the first sizing technology that works on your phone capable of giving you highly accurate sizing result from just two photos with you wearing normal clothing on any background,” said Koh. “Legacy technologies on the market today require you to wear a very tight-fitting spandex suit, take 360 photos of you and require a plain background to work. Other technologies give you accuracy with five inches deviation in accuracy while Bodygram is the first technology to give you sub-one-inch accuracy. We are the first to use both computer vision and machine learning techniques to solve the problem of predicting your body shape underneath the clothes. Once we predicted your body shape we wrote our proprietary algorithm to calculate the circumferences and the length for each part of the body.”

Koh hopes the technology will reduce returns.

“It’s not uncommon to see clothing return rates reaching in the 40-50 percent range,” he said. “Apparel clothing sales is among the lowest penetration in online shopping.”

The system can also be used to measure your body over time to collect health and weight data, as well as to help other manufacturers produce products that fit you perfectly. The app will launch this summer on Android and iOS. The company will be licensing the technology to other providers that will be able to create custom fits based on just a few side and front photos. Sales at the company grew 175 percent this year, and it now has 350,000 buyers already creating custom shirts.

A number of competitors are in this interesting space, most notably ShapeScale, a company that appeared at TechCrunch Disrupt and promised a full body scan using a robotic scale. This, however, is the first commercial use of standard photos to measure your appendages and thorax and it’s an impressive step forward in the world of custom clothing.


Apple is rebuilding Maps from the ground up


I’m not sure if you’re aware, but the launch of Apple Maps went poorly. After a rough first impression, an apology from the CEO, several years of patching holes with data partnerships and some glimmers of light with long-awaited transit directions and improvements in business, parking and place data, Apple Maps is still not where it needs to be to be considered a world-class service.

Maps needs fixing.

Apple, it turns out, is aware of this, so it’s re-building the maps part of Maps.

It’s doing this by using first-party data gathered by iPhones with a privacy-first methodology and its own fleet of cars packed with sensors and cameras. The new product will launch in San Francisco and the Bay Area with the next iOS 12 beta and will cover Northern California by fall.

Every version of iOS will get the updated maps eventually, and they will be more responsive to changes in roadways and construction, more visually rich depending on the specific context they’re viewed in and feature more detailed ground cover, foliage, pools, pedestrian pathways and more.

This is nothing less than a full re-set of Maps and it’s been four years in the making, which is when Apple began to develop its new data-gathering systems. Eventually, Apple will no longer rely on third-party data to provide the basis for its maps, which has been one of its major pitfalls from the beginning.

“Since we introduced this six years ago — we won’t rehash all the issues we’ve had when we introduced it — we’ve done a huge investment in getting the map up to par,” says Apple SVP Eddy Cue, who now owns Maps, in an interview last week. “When we launched, a lot of it was all about directions and getting to a certain place. Finding the place and getting directions to that place. We’ve done a huge investment of making millions of changes, adding millions of locations, updating the map and changing the map more frequently. All of those things over the past six years.”

But, Cue says, Apple has room to improve on the quality of Maps, something that most users would agree on, even with recent advancements.

“We wanted to take this to the next level,” says Cue. “We have been working on trying to create what we hope is going to be the best map app in the world, taking it to the next step. That is building all of our own map data from the ground up.”

In addition to Cue, I spoke to Apple VP Patrice Gautier and more than a dozen Apple Maps team members at its mapping headquarters in California this week about its efforts to re-build Maps, and to do it in a way that aligned with Apple’s very public stance on user privacy.

If, like me, you’re wondering whether Apple thought of building its own maps from scratch before it launched Maps, the answer is yes. At the time, there was a choice to be made about whether or not it wanted to be in the business of maps at all. Given that the future of mobile devices was becoming very clear, it knew that mapping would be at the core of nearly every aspect of its devices, from photos to directions to location services provided to apps. Decision made, Apple plowed ahead, building a product that relied on a patchwork of data from partners like TomTom, OpenStreetMap and other geo data brokers. The result was underwhelming.

Almost immediately after Apple launched Maps, it realized that it was going to need help and it signed on a bunch of additional data providers to fill the gaps in location, base map, point-of-interest and business data.

It wasn’t enough.

“We decided to do this just over four years ago. We said, ‘Where do we want to take Maps? What are the things that we want to do in Maps?’ We realized that, given what we wanted to do and where we wanted to take it, we needed to do this ourselves,” says Cue.

Because Maps is so core to so many functions, success wasn’t tied to just one function. Maps needed to be great at transit, driving and walking — but also as a utility used by apps for location services and other functions.

Cue says that Apple needed to own all of the data that goes into making a map, and to control it from a quality as well as a privacy perspective.

There’s also the matter of corrections, updates and changes entering a long loop of submission to validation to update when you’re dealing with external partners. The Maps team would have to be able to correct roads, pathways and other updating features in days or less, not months. Not to mention the potential competitive advantages it could gain from building and updating traffic data from hundreds of millions of iPhones, rather than relying on partner data.

Cue points to the proliferation of devices running iOS, now over a billion, as a deciding factor to shift its process.

“We felt like because the shift to devices had happened — building a map today in the way that we were traditionally doing it, the way that it was being done — we could improve things significantly, and improve them in different ways,” he says. “One is more accuracy. Two is being able to update the map faster based on the data and the things that we’re seeing, as opposed to driving again or getting the information where the customer’s proactively telling us. What if we could actually see it before all of those things?”

I query him on the rapidity of Maps updates, and whether this new map philosophy means faster changes for users.

“The truth is that Maps needs to be [updated more], and even are today,” says Cue. “We’ll be doing this even more with our new maps, [with] the ability to change the map in real time and often. We do that every day today. This is expanding us to allow us to do it across everything in the map. Today, there’s certain things that take longer to change.

“For example, a road network is something that takes a much longer time to change currently. In the new map infrastructure, we can change that relatively quickly. If a new road opens up, immediately we can see that and make that change very, very quickly around it. It’s much, much more rapid to do changes in the new map environment.”

So a new effort was created to begin generating its own base maps, the very lowest building block of any really good mapping system. After that, Apple would begin layering on living location data, high-resolution satellite imagery and brand new intensely high-resolution image data gathered from its ground cars until it had what it felt was a “best in class” mapping product.

There is only really one big company on earth that owns an entire map stack from the ground up: Google.

Apple knew it needed to be the other one. Enter the vans.

Apple vans spotted

Though the overall project started earlier, the first glimpse most folks had of Apple’s renewed efforts to build the best Maps product was the vans that started appearing on the roads in 2015 with “Apple Maps” signs on the side. Capped with sensors and cameras, these vans popped up in various cities and sparked rampant discussion and speculation.

The new Apple Maps will be the first time the data collected by these vans is actually used to construct and inform its maps. This is their coming out party.

Some people have commented that Apple’s rigs look more robust than the simple GPS + Camera arrangements on other mapping vehicles — going so far as to say they look more along the lines of something that could be used in autonomous vehicle training.

Apple isn’t commenting on autonomous vehicles, but there’s a reason the arrays look more advanced: they are.

Earlier this week I took a ride in one of the vans as it ran a sample route to gather the kind of data that would go into building the new maps. Here’s what’s inside.

In addition to a beefed-up GPS rig on the roof, four LiDAR arrays mounted at the corners and eight cameras shooting overlapping high-resolution images, there’s also the standard physical measuring tool attached to a rear wheel that allows for precise tracking of distance and image capture. In the rear there is a surprising lack of bulky equipment. Instead, it’s a straightforward Mac Pro bolted to the floor, attached to an array of solid state drives for storage. A single USB cable routes up to the dashboard where the actual mapping-capture software runs on an iPad.

While mapping, a driver…drives, while an operator takes care of the route, ensuring that a coverage area that has been assigned is fully driven, as well as monitoring image capture. Each drive captures thousands of images as well as a full point cloud (a 3D map of space defined by dots that represent surfaces) and GPS data. I later got to view the raw data presented in 3D and it absolutely looks like the quality of data you would need to begin training autonomous vehicles.

More on why Apple needs this level of data detail later.

When the images and data are captured, they are then encrypted on the fly and recorded on to the SSDs. Once full, the SSDs are pulled out, replaced and packed into a case, which is delivered to Apple’s data center, where a suite of software eliminates from the images private information like faces, license plates and other info. From the moment of capture to the moment they’re sanitized, they are encrypted with one key in the van and the other key in the data center. Technicians and software that are part of its mapping efforts down the pipeline from there never see unsanitized data.

This is just one element of Apple’s focus on the privacy of the data it is utilizing in New Maps.

Probe data and privacy

Throughout every conversation I have with any member of the team throughout the day, privacy is brought up, emphasized. This is obviously by design, as Apple wants to impress upon me as a journalist that it’s taking this very seriously indeed, but it doesn’t change the fact that it’s evidently built in from the ground up and I could not find a false note in any of the technical claims or the conversations I had.

Indeed, from the data security folks to the people whose job it is to actually make the maps work well, the constant refrain is that Apple does not feel that it is being held back in any way by not hoovering every piece of customer-rich data it can, storing and parsing it.

The consistent message is that the team feels it can deliver a high-quality navigation, location and mapping product without the directly personal data used by other platforms.

“We specifically don’t collect data, even from point A to point B,” notes Cue. “We collect data — when we do it — in an anonymous fashion, in subsections of the whole, so we couldn’t even say that there is a person that went from point A to point B. We’re collecting the segments of it. As you can imagine, that’s always been a key part of doing this. Honestly, we don’t think it buys us anything [to collect more]. We’re not losing any features or capabilities by doing this.”

The segments that he is referring to are sliced out of any given person’s navigation session. Neither the beginning nor the end of any trip is ever transmitted to Apple. Rotating identifiers, not personal information, are assigned to any data or requests sent to Apple, and it augments the “ground truth” data provided by its own mapping vehicles with this “probe data” sent back from iPhones.

Because only random segments of any person’s drive are ever sent, and that data is completely anonymized, there is no way to tie any trip back to a single individual. The local system signs the IDs and only it knows to whom that ID refers. Apple is working very hard here to not know anything about its users. This kind of privacy can’t be added on at the end; it has to be woven in at the ground level.

Because Apple’s business model does not rely on serving you, say, an ad for a Chevron on your route, it doesn’t even need to tie advertising identifiers to users.

Any personalization or Siri requests are all handled on-board by the iOS device’s processor. So if you get a drive notification that tells you it’s time to leave for your commute, that’s learned, remembered and delivered locally, not from Apple’s servers.

That’s not new, but it’s important to note given the new thing to take away here: Apple is flipping on the power of having millions of iPhones passively and actively improving their mapping data in real time.

In short: Traffic, real-time road conditions, road systems, new construction and changes in pedestrian walkways are about to get a lot better in Apple Maps.

The secret sauce here is what Apple calls probe data: essentially, little slices of vector data that represent direction and speed, transmitted back to Apple completely anonymized, with no way to tie them to a specific user or even to any given trip. Instead, Apple is reaching in and sipping a tiny amount of data from millions of users, giving it a holistic, real-time picture without compromising user privacy.
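
Apple hasn’t published how the slicing works, but based on the description above (trip endpoints never transmitted, rotating identifiers instead of personal ones, only short direction-and-speed slivers sent), an illustrative sketch of the idea could look roughly like this; the segment length and trimming amounts are assumptions:

```python
# Illustrative only; not Apple's code. Slices a locally held trace into short,
# anonymized "probe" segments: the start and end of the trip are dropped, and
# each slice carries a rotating identifier rather than a user ID.
import secrets

def probe_segments(trace, trim=20, seg_len=10):
    """trace: list of (heading_degrees, speed_mps) samples for one trip."""
    core = trace[trim:len(trace) - trim]          # never send trip endpoints
    segments = []
    for i in range(0, len(core) - seg_len + 1, seg_len):
        segments.append({
            "id": secrets.token_hex(8),           # rotating, per-segment identifier
            "samples": core[i:i + seg_len],       # short direction/speed sliver
        })
    return segments  # each slice is uploaded independently, unlinkable to a full trip
```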

If you’re driving, walking or cycling, your iPhone can already tell this. Now if it knows you’re driving, it also can send relevant traffic and routing data in these anonymous slivers to improve the entire service. This only happens if your Maps app has been active, say you check the map, look for directions, etc. If you’re actively using your GPS for walking or driving, then the updates are more precise and can help with walking improvements like charting new pedestrian paths through parks — building out the map’s overall quality.

All of this, of course, is governed by whether you opted into location services, and can be toggled off using the maps location toggle in the Privacy section of settings.

Apple says that this will have a near zero effect on battery life or data usage, because you’re already using the ‘maps’ features when any probe data is shared and it’s a fraction of what power is being drawn by those activities.

From the point cloud on up

But maps cannot live on ground truth and mobile data alone. Apple is also gathering new high-resolution satellite data to combine with its ground truth data for a solid base map. It’s then layering satellite imagery on top of that to better determine foliage, pathways, sports facilities and building shapes.

After the downstream data has been cleaned up of license plates and faces, it gets run through a bunch of computer vision programming to pull out addresses, street signs and other points of interest. These are cross referenced to publicly available data like addresses held by the city and new construction of neighborhoods or roadways that comes from city planning departments.

But one of the special sauce bits that Apple is adding to the mix of mapping tools is a full-on point cloud that maps the world around the mapping van in 3D. This gives the team all kinds of opportunities to better understand what items are street signs (a retro-reflective rectangular object about 15 feet off the ground? Probably a street sign) or stop signs or speed limit signs.
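
To make that parenthetical concrete, here is a toy sketch in the same spirit; Apple’s actual pipeline combines the point cloud, imagery, semantic segmentation and human review, and the feature names and thresholds below are invented:

```python
# Toy street-sign heuristic; thresholds and feature names are invented.
def looks_like_street_sign(cluster: dict) -> bool:
    """cluster: simple features extracted from a point-cloud blob."""
    return (
        cluster["retroreflective"]                   # signs bounce LiDAR strongly
        and cluster["shape"] in ("rectangle", "octagon", "diamond")
        and 10.0 <= cluster["height_ft"] <= 20.0     # roughly 15 feet off the ground
        and cluster["area_sqft"] < 12.0              # sign-sized, not billboard-sized
    )

print(looks_like_street_sign(
    {"retroreflective": True, "shape": "rectangle",
     "height_ft": 15.0, "area_sqft": 4.5}))          # True
```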

It seems like it also could enable positioning of navigation arrows in 3D space for AR navigation, but Apple declined to comment on “any future plans” for such things.

Apple also uses semantic segmentation and Deep Lambertian Networks to analyze the point cloud coupled with the image data captured by the car and from high-resolution satellites in sync. This allows 3D identification of objects, signs, lanes of traffic and buildings and separation into categories that can be highlighted for easy discovery.

The coupling of high-resolution image data from car and satellite, plus a 3D point cloud, results in Apple now being able to produce full orthogonal reconstructions of city streets with textures in place. This is massively higher-resolution and easier to see, visually. And it’s synchronized with the “panoramic” images from the car, the satellite view and the raw data. These techniques are used in self-driving applications because they provide a really holistic view of what’s going on around the car. But the ortho view can do even more for human viewers of the data by allowing them to “see” through brush or tree cover that would normally obscure roads, buildings and addresses.

This is hugely important when it comes to the next step in Apple’s battle for supremely accurate and useful Maps: human editors.

Apple has had a team of tool builders working specifically on a toolkit that can be used by human editors to vet and parse data, street by street. The editor’s suite includes tools that allow human editors to assign specific geometries to flyover buildings (think Salesforce tower’s unique ridged dome) that allow them to be instantly recognizable. It lets editors look at real images of street signs shot by the car right next to 3D reconstructions of the scene and computer vision detection of the same signs, instantly recognizing them as accurate or not.

Another tool corrects addresses, letting an editor quickly move an address to the center of a building, determine whether they’re misplaced and shift them around. It also allows for access points to be set, making Apple Maps smarter about the “last 50 feet” of your journey. You’ve made it to the building, but what street is the entrance actually on? And how do you get into the driveway? With a couple of clicks, an editor can make that permanently visible.

“When we take you to a business and that business exists, we think the precision of where we’re taking you to, from being in the right building,” says Cue. “When you look at places like San Francisco or big cities from that standpoint, you have addresses where the address name is a certain street, but really, the entrance in the building is on another street. They’ve done that because they want the better street name. Those are the kinds of things that our new Maps really is going to shine on. We’re going to make sure that we’re taking you to exactly the right place, not a place that might be really close by.”

Water, swimming pools (new to Maps entirely), sporting areas and vegetation are now more prominent and fleshed out thanks to new computer vision and satellite imagery applications. So Apple had to build editing tools for those, as well.

Many hundreds of editors will be using these tools, in addition to the thousands of employees Apple already has working on maps, but the tools had to be built first, now that Apple is no longer relying on third parties to vet and correct issues.

And the team also had to build computer vision and machine learning tools that allow it to determine whether there are issues to be found at all.

Anonymous probe data from iPhones, visualized, looks like thousands of dots, ebbing and flowing across a web of streets and walkways, like a luminescent web of color. At first, chaos. Then, patterns emerge. A street opens for business, and nearby vessels pump orange blood into the new artery. A flag is triggered and an editor looks to see if a new road needs a name assigned.

A new intersection is added to the web and an editor is flagged to make sure that the left turn lanes connect correctly across the overlapping layers of directional traffic. This has the added benefit of massively improved lane guidance in the new Apple Maps.

Apple is counting on this combination of human and AI flagging to allow editors to first craft base maps and then also maintain them as the ever-changing biomass wreaks havoc on roadways, addresses and the occasional park.

Here there be Helvetica

Apple’s new Maps, like many other digital maps, display vastly differently depending on scale. If you’re zoomed out, you get less detail. If you zoom in, you get more. But Apple has a team of cartographers on staff that work on more cultural, regional and artistic levels to ensure that its Maps are readable, recognizable and useful.

These teams have goals that are at once concrete and a bit out there — in the best traditions of Apple pursuits that intersect the technical with the artistic.

The maps need to be usable, but they also need to fulfill cognitive goals on cultural levels that go beyond what any given user might know they need. For instance, in the U.S., it is very common to have maps that have a relatively low level of detail even at a medium zoom. In Japan, however, the maps are absolutely packed with details at the same zoom, because that increased information density is what is expected by users.

This is the department of details. They’ve reconstructed replicas of hundreds of actual road signs to make sure that the shield on your navigation screen matches the one you’re seeing on the highway road sign. When it comes to public transport, Apple licensed all of the typefaces that you see on your favorite subway systems, like Helvetica for NYC. And the line numbers are in the exact same order that you’re going to see them on the platform signs.

It’s all about reducing the cognitive load that it takes to translate the physical world you have to navigate into the digital world represented by Maps.

Bottom line

The new version of Apple Maps will be in preview next week with just the Bay Area of California going live. It will be stitched seamlessly into the “current” version of Maps, but the difference in quality level should be immediately visible based on what I’ve seen so far.

Better road networks, more pedestrian information, sports areas like baseball diamonds and basketball courts, more land cover, including grass and trees, represented on the map, as well as buildings, building shapes and sizes that are more accurate. A map that feels more like the real world you’re actually traveling through.

Search is also being revamped to make sure that you get more relevant results (on the correct continents) than ever before. Navigation, especially pedestrian guidance, also gets a big boost. Parking areas and building details to get you the last few feet to your destination are included, as well.

What you won’t see, for now, is a full visual redesign.

“You’re not going to see huge design changes on the maps,” says Cue. “We don’t want to combine those two things at the same time because it would cause a lot of confusion.”

Apple Maps is getting the long-awaited attention it really deserves. By taking ownership of the project fully, Apple is committing itself to actually creating the map that users expected of it from the beginning. It’s been a lingering shadow on iPhones, especially, where alternatives like Google Maps have offered more robust feature sets that are so easy to compare against the native app but impossible to access at the deep system level.

The argument has been made ad nauseam, but it’s worth saying again that if Apple thinks that mapping is important enough to own, it should own it. And that’s what it’s trying to do now.

“We don’t think there’s anybody doing this level of work that we’re doing,” adds Cue. “We haven’t announced this. We haven’t told anybody about this. It’s one of those things that we’ve been able to keep pretty much a secret. Nobody really knows about it. We’re excited to get it out there. Over the next year, we’ll be rolling it out, section by section in the U.S.”


PUBG juggernaut hits 400 million users, and for a limited time, players can get the PC version for $19.99


PlayerUnknown’s Battlegrounds, the progenitor and once-reigning champion of last-player-standing battle royale gaming that has taken the video game world by storm, has hit over 400 million players globally across all platforms.

As a perk and potential sop to bring new players to the PC platform, PUBG is offering the full version of its full-throttle game for $19.99 — a 33.33 percent cut from the game’s regular price.
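
The article doesn’t state the list price, but assuming PUBG’s usual $29.99 sticker price on PC, the math checks out:

```python
# Assumes a $29.99 list price (not stated above) to sanity-check the discount.
list_price, sale_price = 29.99, 19.99
print(f"{(list_price - sale_price) / list_price:.1%}")  # 33.3%
```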

The offer includes classic maps Erangel and Miramar and the all-new Sanhok, launching on June 22, according to a statement from the company.

PUBG has already moved 50 million units of its game across PC and Xbox One consoles and has hit 87 million daily players. Roughly 227 million players engage in PUBG’s particular murder-death-kill competition every month.

“We are genuinely humbled by the ongoing success and growth of PUBG,” said CH Kim, CEO, PUBG Corp. “We are not resting on our laurels though, as we continue to focus on performance and content updates for current players to enjoy, and look to our future as we aspire to deliver the signature PUBG experience to fans worldwide.”

While PUBG’s rise has been swift, hitting the 400 million figure in a little over six months since its worldwide release (and over 15 months since its early access release), the game’s publisher has been beset with competitors nipping at its heels.

Already, the game has been toppled from the top slot by the new player on the battle royale block — Fortnite.

In April alone, Fortnite pulled in $296 million for its own last-avatar-standing game — and the game’s popularity likely will only grow once the title takes its bow on the Android gaming platform later this month.

PUBG, the company, and its parent company, Bluehole, aren’t taking the competition lying down. They’ve taken Fortnite’s creators to court, filing a suit against Epic Games over copyright infringement concerns. As we reported earlier, the South Korean suit, noted by The Korea Times, takes particular issue with Fortnite’s battle royale mode.

PUBG leadership declined to comment on the lawsuit.


With loans of just $10, this startup has built a financial services powerhouse in emerging markets


Peris Kimeli and Betsy Cheruyot were students at Kenyatta University thinking about launching a business when they applied for their first loans from the mobile lending company, Tala.

Hoping to get a clothing business off the ground and make some money to live on while going to school, the two young Kenyans downloaded the Tala mobile app, and within minutes received loans totaling about $15.

“Between us and poverty, we had about 200 shillings,” Kimeli said of her early days starting their business. “We were like, what are we going to eat? Our parents said, ‘No. We’re not going to send money… You go figure it out.’ So we went and we did that.”

Kimeli and Cheruyot took that $15 loan and went to Nairobi’s famous secondhand market, Gikomba, where they bought 15 dresses at 100 shillings each and resold them in dorms and hostels for 200 shillings.

“Two remained, but we had no problem — since we could keep them, we could wear them. By the end of the month, we had 7000 [shillings],” Kimeli said. “We borrowed again — this time we borrowed 3000 [shillings] — we went out and bought some more dresses, and that’s how we’ve been.”

Peris Kimeli and Betsy Cheruyot in Nairobi. Photo courtesy of Tala

Similar stories are playing out in cities across the world — in countries like India, Mexico, the Philippines and Tanzania — all because of Tala, a young, Santa Monica, Calif.-based, financial services startup.

Now in its fourth year, Tala has already distributed around $300 million in loans to 1.3 million borrowers like Kimeli. The company plans to continue expanding its geographical reach and range of financial services, thanks in part to $65 million in new financing from billionaire-backed investment funds like Steve Case’s Revolution Growth fund.

“We see Tala as a company building the future of finance. They have quickly become one of the leading mobile-first lenders in emerging markets where well over 3 billion consumers do not have access to traditional banks,” says Case.

Shivani Siroya, the founder and chief executive officer at Tala, knows just how important — and transformational — outside investment can be for individuals in emerging markets.

Siroya was introduced to the power of financial independence while working with the United Nations Population Fund.

“I ended up interviewing 3500 people, in person, across nine different countries,” Siroya says. “What I did was go to their homes with them. Walk with them to work and sit there in the back of their stores and tally how many customers came in and how many products they sold. How much money goes under the mattress and how much money goes to allowances… These individuals are hard-working and they are credit worthy, but you couldn’t lend to them because they couldn’t be documented.”

Siroya launched Tala in March 2014 to create a mechanism for providing credit scores to financial institutions so that these undocumented women could get the loans they needed to become financially independent and entrepreneurial, she says. What Tala’s founder quickly realized was that the easiest way to create credit scores that other financial institutions would recognize would be for Tala to start issuing loans itself.

The app — available for download on Android devices — works by collecting data on texts and calls, merchant transactions, overall app usage, and personal identifiers on a mobile phone to create an instantaneous profile of its potential borrowers. Customers simply download the app, apply for a loan and receive a decision in seconds. Most Tala borrowers actually receive their credit in less than 10 minutes.
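
Tala’s underwriting model is proprietary, so the features and weights below are invented; the sketch only illustrates the general shape described here, turning on-device signals into an instant score and decision:

```python
# Purely illustrative scoring sketch; Tala's real model is proprietary.
WEIGHTS = {                             # invented feature weights
    "call_and_text_regularity": 25.0,   # texts and calls
    "merchant_transaction_count": 0.5,  # merchant transactions
    "daily_app_usage_minutes": 0.2,     # overall app usage
}

def credit_score(signals: dict) -> float:
    """signals: behavioral features derived from the phone, with user consent."""
    return sum(weight * signals.get(name, 0.0) for name, weight in WEIGHTS.items())

def decide(signals: dict, threshold: float = 40.0) -> str:
    return "approve" if credit_score(signals) >= threshold else "decline"

print(decide({"call_and_text_regularity": 1.2, "merchant_transaction_count": 30,
              "daily_app_usage_minutes": 45}))  # approve (score 54.0)
```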

Shivani Siroya (Tala CEO) at TechCrunch Disrupt NY 2017

Siroya started Tala’s lending in Kenya — in part because of the robust mobile payment infrastructure that exists in the country — before eventually expanding to the Philippines and then Tanzania. By the end of last year Tala had added operations in Mexico and India to span more geographies than any of the other unsecured mobile lenders in the market. The company boasts 215 employees across offices in Santa Monica, Nairobi, Dar es Salaam, Manila, Mexico City, Mumbai, and Bangalore.

Tala typically lends around $70 to its borrowers, but loans range from $10 on the low end to $500 at the high end. “The point of credit is leveraging your income to improve your quality of life,” Siroya says. Lower loan sizes could mean a product that’s geared more towards consumption than towards leveraging a product to invest for economic stability, she says.

“We want to start at $10, because we realize that 70% of our customers are using this for working capital. They’re small business owners. That’s really the gap in the market,” says Siroya.

Tala’s borrowers usually pay back their loans within 30 days, and the company charges 11 to 15 percent interest on the money it disburses.
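
The article doesn’t spell out the fee structure, but assuming the quoted rate is a flat charge on the principal for the roughly 30-day term, a typical $70 loan would repay like this:

```python
# Assumes the 11-15% figure is a flat fee on principal for the ~30-day term.
principal = 70.00
for rate in (0.11, 0.15):
    print(f"{rate:.0%} fee: repay ${principal * (1 + rate):.2f}")
# 11% fee: repay $77.70
# 15% fee: repay $80.50
```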

The company raised its first capital in 2013 from Lowercase Capital, Google Ventures, and Collaborative Fund. With the new financing, led by Revolution, Siroya now has $50 million in equity to match another $11 million in credit facilities. In all, the company has raised $94 million in equity across three rounds. Steve Murray, a managing partner of Revolution Growth — and former director on the board of business lending startup Kabbage — will be joining Tala’s board of directors with the latest round.

Previous investors, including the growth investment firm IVP, Data Collective, Lowercase Capital, Ribbit Capital, and Female Founders Fund, also participated in Tala’s latest financing.

“We have been fortunate to invest in Twitter and Dropbox and a lot of other companies, but when I think about the companies that we have had the opportunity to back that will have the greatest impact on the world, Tala is certainly one of them,” says IVP general partner Jules Maltz. “That’s because it has the opportunity to reach the 2 billion people who are unbanked and don’t have access to financial products.”

Those 2 billion include thousands just like Nairobi’s budding new entrepreneurs, Kimeli and Cheruyot.

“I believe in the magic of taking risks and new beginnings,” says Kimeli. “If we hadn’t began on that day, we could have just been desperate now. As in, we might not have a place to eat, maybe. It’s good to take risks, to start something new.”


AI game trainer Gosu.ai raises $1.9M to give gamers a virtual assistant


If you play hardcore and competitive games, you want to win, so it would be useful to have someone leaning over your shoulder giving you tips on how to play better. Someone who knows all your moves and behaviors, for instance.

That’s the thinking behind Gosu.ai, which has developed an AI assistant to help gamers play smarter and improve their skills. It’s now raised a $1.9M funding round led by Runa Capital, with participation from Ventech and existing investor Sistema_VC. Previously, the startup was backed by Gagarin Capital, a new Silicon Valley-based early-stage VC firm focused on AI investments that also invested in Prisma and MSQRD, which exited to Facebook and Google, respectively.

Gosu.ai provides tools and guidance for users to improve their skills in competitive games. It analyzes their matches and makes personal recommendations. It also helps players prep, suggesting gear sets, starting items and offering ideas on how to take on a particular opponent. The platform currently works with Dota 2, with plans to support CS:GO and PUBG in the near future.

The company was founded by Alisa Chumachenko (pictured), who was the creator and former CEO of Game Insight, a big gaming world player. She says: “There are 2 billion gamers in the world now and 600 million of them play hardcore games, such as MOBAs, Shooters and MMOs. We can help those players reach their full potential with our AI assistants.”

Gosu.ai’s main competitors are Mobalytics, Dojomadness and Moremmr. But the main difference is that those competitors run analytics on raw statistics and find generalized weak spots by comparing a player with others, giving general recommendations. Gosu.ai analyzes the specific actions of each player, down to the movement of their mouse, to tailor recommendations directly to that player. So it’s more like a virtual assistant than a training platform.

In addition, Gosu.ai also works in the B2B field, offering gaming companies a variety of AI tools, such as predictive analytics.
