Emerging-Technologies

AR 1.0 is dead: Here’s what it got wrong

The first wave of AR startups offering smart glasses is now over, with a few exceptions.

Google acquired North this week for an undisclosed sum. The Canadian company had raised nearly $200 million, but the release of its Focals 2.0 smart glasses has been cancelled, making the acquisition a bittersweet soft landing.

Many AR startups before North made huge promises and raised huge amounts of capital before flaring out in a similarly dramatic fashion.

The technology was almost there in a lot of cases, but the real issue was that the stakes to beat the major players to market were so high that many entrants pushed out boring, general consumer products. In a race to be everything for everybody, the industry relied on nascent developer platforms to do the dirty work of building their early use cases, which contributed heavily to nonexistent user adoption.

A key error of this batch was thinking that an AR glasses company was hardware-first, when the reality is that the missing value is almost entirely centered on missing first-party software experiences. To succeed, the next generation of consumer AR glasses will have to nail this.

Image Credits: ODG

App ecosystems alone don’t create product-market fit

Chinese startup Rokid pitches COVID-19 detection glasses in US

Thermal imaging wearables used in China to detect COVID-19 symptoms could soon be deployed in the U.S.

Hangzhou-based AI startup Rokid is in talks with several companies to sell its T1 glasses in America, according to Rokid’s U.S. Director Liang Guan.

Rokid is among a wave of Chinese companies creating technology to address the coronavirus pandemic, which has dealt a blow to the country’s economy. 

According to Guan, Rokid’s T1 thermal glasses use an infrared sensor to detect the temperatures of up to 200 people within two minutes, from as far as three meters away. The devices carry a Qualcomm CPU and a 12-megapixel camera, and offer augmented reality features, including hands-free voice controls, for recording live photos and videos.
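
The screening logic behind such a device is simple to illustrate. Below is a minimal, hypothetical sketch of flagging elevated readings from an infrared sensor against a fever threshold; the cutoff, data format and function names are illustrative assumptions, not details Rokid has published.

```python
# Hypothetical sketch: screening infrared temperature readings for possible fevers.
# The threshold and data format are illustrative assumptions, not Rokid specifications.

FEVER_THRESHOLD_C = 37.3  # assumed cutoff; real deployments tune this per local guidance

def flag_elevated(readings):
    """readings: list of (person_id, temperature_c) tuples from the IR sensor."""
    return [(pid, temp) for pid, temp in readings if temp >= FEVER_THRESHOLD_C]

if __name__ == "__main__":
    sample = [("A12", 36.6), ("A13", 38.1), ("A14", 37.0)]
    for pid, temp in flag_elevated(sample):
        print(f"Flag {pid}: {temp:.1f} C, refer for follow-up")
```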

The Chinese startup (with a San Francisco office) plans B2B sales of its wearable devices in the U.S. to assist businesses, hospitals and law enforcement with COVID-19 detection, according to Guan.

Rokid is also offering IoT and software solutions for facial recognition and data management, as part of its T1 packages.

Image Credits: Rokid

The company is working on deals with U.S. hospitals and local municipalities to deliver shipments of the smart glasses, but could not disclose names due to confidentiality agreements.

One commercial venture that could use the thermal imaging wearables is California-based e-commerce company Weee!.

The online grocer is evaluating Rokid’s T1 glasses to monitor temperatures of its warehouse employees throughout the day, Weee! founder Larry Liu confirmed to TechCrunch via email.

Procedures for managing people who exhibit COVID-19-related symptoms, such as referring them for testing, are left to end users to determine, according to Rokid. “The clients can do the follow-up action, such as giving them a mask or asking to work from home,” Guan said.

The T1 glasses connect via USB and can be set up for IoT capabilities for commercial clients to sync to their own platforms. The product could capture the attention of U.S. regulators, who have become increasingly wary of Chinese tech firms’ handling of American citizen data. Rokid says it doesn’t collect info from the T1 glasses directly.

“Regarding this module…we do not take any data to the cloud. For customers, privacy is very important to them. The data measurement is stored locally,” according to Guan.

Image Credits: Rokid

Founded in 2014 by Eric Wong and Mingming Zhu, Rokid raised $100 million at the Series B level in 2018. The business focuses primarily on developing AI and AR tech for applications from manufacturing to gaming, but developed the T1 glasses in response to China’s COVID-19 outbreak.

The goal was to provide businesses and authorities a thermal imaging detection tool that is wearable, compact, mobile and more effective than the common options.

Large scanning stations, such as those used in airports, have the drawback of not being easily portable, while handheld infrared thermometers pose their own risks.

“You have to point them to people’s foreheads…you need to be really close, it’s not wearable and you’re not practicing social distancing to use those,” Guan said.

Rokid pivoted to create the T1 glasses shortly after COVID-19 broke out in China in late 2019. Other Chinese tech startups that have joined the virus-fighting mission include face recognition giant SenseTime — which has installed thermal imaging systems at railway stations across China — and its close rival Megvii, which has set up similar thermal solutions in supermarkets.

On Rokid’s motivations, “At the time we thought something like this can really help the frontline people still working,” Guan said.

The startup’s engineering team developed the T1 product in just under two months. In China, Rokid’s smart glasses have been used by national park staff, in schools and by national authorities to screen for COVID-19 symptoms.

Temperature detectors have their limitations, however, as research has shown that more than half of China’s COVID-19 patients did not have a fever when admitted to hospital.

Source: Johns Hopkins University & Medicine Coronavirus Resource Center

The growth rate of China’s coronavirus cases — which peaked at 83,306 and led to 3,345 deaths — has declined, and parts of the country have begun to reopen from lockdown. There is still debate, however, about the veracity of data coming out of China on COVID-19. That led to a row between the White House and the World Health Organization, which ultimately saw President Trump halt U.S. contributions to the global body this week.

As COVID-19 cases and related deaths continue to rise in the U.S., technological innovation will become central to the health response and finding some new normal for personal mobility and economic activity. That will certainly bring fresh facets to the common tech conundrums — namely measuring efficacy and balancing benefits with personal privacy.

For its part, Rokid already has new features for its T1 thermal smart glasses in the works. The Chinese startup plans to upgrade the device to take multiple temperature readings simultaneously for up to four people at a time.

“That’s not on the market yet, but we will release this very soon as an update,” Guan said.

WorldGaze uses smartphone cameras to help voice AIs cut to the chase

If you find voice assistants frustratingly dumb, you’re hardly alone. The much-hyped promise of AI-driven vocal convenience very quickly falls through the cracks of robotic pedantry.

A smart AI that has to come back again (and sometimes again) to ask for extra input to execute your request can seem especially dumb — when, for example, it doesn’t get that the most likely repair shop you’re asking about is not any one of them but the one you’re parked outside of right now.

Researchers at the Human-Computer Interaction Institute at Carnegie Mellon University, working with Gierad Laput, a machine learning engineer at Apple, have devised a demo software add-on for voice assistants that lets smartphone users boost the savvy of an on-device AI by giving it a helping hand — or rather a helping head.

The prototype system makes simultaneous use of a smartphone’s front and rear cameras to be able to locate the user’s head in physical space, and more specifically within the immediate surroundings — which are parsed to identify objects in the vicinity using computer vision technology.

The user is then able to use their head as a pointer to direct their gaze at whatever they’re talking about — i.e. “that garage” — wordlessly filling in contextual gaps in the AI’s understanding in a way the researchers contend is more natural.

So, instead of needing to talk like a robot in order to tap the utility of a voice AI, you can sound a bit more, well, human. Asking stuff like “Siri, when does that Starbucks close?” Or — in a retail setting — “are there other color options for that sofa?” Or asking for an instant price comparison between “this chair and that one.” Or for a lamp to be added to your wish-list.

In a home/office scenario, the system could also let the user remotely control a variety of devices within their field of vision — without needing to be hyper-specific about it. Instead they could just look toward the smart TV or thermostat and speak the required volume/temperature adjustment.

The team has put together a demo video (below) showing the prototype — which they’ve called WorldGaze — in action. “We use the iPhone’s front-facing camera to track the head in 3D, including its direction vector. Because the geometry of the front and back cameras are known, we can raycast the head vector into the world as seen by the rear-facing camera,” they explain in the video.

“This allows the user to intuitively define an object or region of interest using the head gaze. Voice assistants can then use this contextual information to make enquiries that are more precise and natural.”
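
The description suggests a simple geometric pipeline: estimate the head pose with the front camera, rotate that gaze ray into the rear camera’s frame using the known relative geometry, and select whichever detected object the ray passes closest to. The sketch below illustrates the idea; the transforms, coordinates and detections are invented for illustration, and this is not the authors’ actual implementation.

```python
import numpy as np

# Illustrative sketch of the WorldGaze idea: take the head's gaze direction,
# known in the front-camera frame, express it in the rear-camera frame and
# pick the detected object it points at. All values here are made up.

def to_rear_frame(vec_front, rotation_front_to_rear):
    """Rotate a direction vector from the front-camera frame into the rear-camera frame."""
    return rotation_front_to_rear @ vec_front

def pick_gazed_object(gaze_origin, gaze_dir, detections):
    """detections: list of (label, center_xyz) in the rear-camera frame.
    Returns the label whose center lies closest to the gaze ray."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best_label, best_dist = None, float("inf")
    for label, center in detections:
        offset = np.asarray(center) - gaze_origin
        # perpendicular distance from the object's center to the ray
        dist = np.linalg.norm(offset - np.dot(offset, gaze_dir) * gaze_dir)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Example: the head is turned slightly right; two candidates sit ahead of the phone.
R = np.eye(3)  # assume the two camera frames are already aligned, for simplicity
gaze = to_rear_frame(np.array([0.2, 0.0, 1.0]), R)
objects = [("garage", (1.5, 0.0, 8.0)), ("cafe", (-2.0, 0.0, 8.0))]
print(pick_gazed_object(np.zeros(3), gaze, objects))  # -> "garage"
```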

In a research paper presenting the prototype they also suggest it could be used to “help to socialize mobile AR experiences, currently typified by people walking down the street looking down at their devices.”

Asked to expand on this, CMU researcher Chris Harrison told TechCrunch: “People are always walking and looking down at their phones, which isn’t very social. They aren’t engaging with other people, or even looking at the beautiful world around them. With something like WorldGaze, people can look out into the world, but still ask questions to their smartphone. If I’m walking down the street, I can inquire and listen about restaurant reviews or add things to my shopping list without having to look down at my phone. But the phone still has all the smarts. I don’t have to buy something extra or special.”

In the paper they note there is a long body of research related to tracking users’ gaze for interactive purposes — but a key aim of their work here was to develop “a functional, real-time prototype, constraining ourselves to hardware found on commodity smartphones.” (Although the rear camera’s field of view is one potential limitation they discuss, including suggesting a partial workaround for any hardware that falls short.)

“Although WorldGaze could be launched as a standalone application, we believe it is more likely for WorldGaze to be integrated as a background service that wakes upon a voice assistant trigger (e.g., ‘Hey Siri’),” they also write. “Although opening both cameras and performing computer vision processing is energy consumptive, the duty cycle would be so low as to not significantly impact battery life of today’s smartphones. It may even be that only a single frame is needed from both cameras, after which they can turn back off (WorldGaze startup time is 7 sec). Using bench equipment, we estimated power consumption at ~0.1 mWh per inquiry.”
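
The battery claim is easy to sanity-check with back-of-the-envelope arithmetic. The inquiry count and battery capacity below are assumptions for illustration; only the ~0.1 mWh figure comes from the researchers.

```python
# Rough check of the researchers' ~0.1 mWh-per-inquiry estimate.
# Daily usage and battery capacity are assumed values, not figures from the paper.

energy_per_inquiry_mwh = 0.1      # quoted by the researchers
inquiries_per_day = 100           # assumed heavy usage
battery_capacity_mwh = 12_000     # ~12 Wh, roughly a recent smartphone battery

daily_cost_mwh = energy_per_inquiry_mwh * inquiries_per_day
print(f"Daily cost: {daily_cost_mwh} mWh "
      f"({100 * daily_cost_mwh / battery_capacity_mwh:.2f}% of the battery)")
# -> about 10 mWh per day, well under 0.1% of a full charge
```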

Of course there’s still something a bit awkward about a human holding a screen up in front of their face and talking to it — but Harrison confirms the software could work just as easily hands-free on a pair of smart spectacles.

“Both are possible,” he told us. “We choose to focus on smartphones simply because everyone has one (and WorldGaze could literally be a software update), while almost no one has AR glasses (yet). But the premise of using where you are looking to supercharge voice assistants applies to both.”

“Increasingly, AR glasses include sensors to track gaze location (e.g., Magic Leap, which uses it for focusing reasons), so in that case, one only needs outwards facing cameras,” he added.

Taking a further leap it’s possible to imagine such a system being combined with facial recognition technology — to allow a smart spec-wearer to quietly tip their head and ask “who’s that?” — assuming the necessary facial data was legally available in the AI’s memory banks.

Features such as “add to contacts” or “when did we last meet” could then be unlocked to augment a networking or socializing experience. Although, at this point, the privacy implications of unleashing such a system into the real world look rather more challenging than stitching together the engineering. (See, for example, Apple banning Clearview AI’s app for violating its rules.)

“There would have to be a level of security and permissions to go along with this, and it’s not something we are contemplating right now, but it’s an interesting (and potentially scary idea),” agrees Harrison when we ask about such a possibility.

The team was due to present the research at ACM CHI — but the conference was canceled due to the coronavirus.

R&D Roundup: Ultrasound/AI medical imaging, assistive exoskeletons and neural weather modeling

In the time of COVID-19, much of what transpires from the science world to the general public relates to the virus, and understandably so. But other domains, even within medical research, are still active — and as usual, there are tons of interesting (and heartening) stories out there that shouldn’t be lost in the furious activity of coronavirus coverage. This last week brought good news for several medical conditions as well as some innovations that could improve weather reporting and maybe save a few lives in Cambodia.

Ultrasound and AI promise better diagnosis of arrhythmia

Arrhythmia is a relatively common condition in which the heart beats at an abnormal rate, causing a variety of effects, including, potentially, death. It is detected using an electrocardiogram (ECG), and while the technique is sound and widely used, it has its limitations: first, it relies heavily on an expert interpreting the signal, and second, even an expert’s diagnosis doesn’t give a good idea of what the issue looks like in that particular heart. Knowing exactly where the flaw is makes treatment much easier.

Ultrasound is used for internal imaging in lots of ways, but two recent studies establish it as perhaps the next major step in arrhythmia treatment. Researchers at Columbia University used a form of ultrasound monitoring called Electromechanical Wave Imaging to create 3D animations of the patient’s heart as it beat, which helped specialists predict 96% of arrhythmia locations compared with 71% when using the ECG. The two could be used together to provide a more accurate picture of the heart’s condition before undergoing treatment.

Another approach from Stanford applies deep learning techniques to ultrasound imagery and shows that an AI agent can recognize the parts of the heart and record the efficiency with which it is moving blood with accuracy comparable to experts. As with other medical imagery AIs, this isn’t about replacing a doctor but augmenting them; an automated system can help triage and prioritize effectively, suggest things the doctor might have missed or provide an impartial concurrence with their opinion. The code and data set of EchoNet are available for download and inspection.
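
For context, the headline quantity such a system reads off segmented ultrasound frames is the ejection fraction, the share of blood the left ventricle pumps out on each beat. Here is a minimal sketch of that arithmetic; the volumes are invented for illustration and the function is not EchoNet’s actual code.

```python
# Ejection fraction from end-diastolic and end-systolic left-ventricle volumes,
# the pumping-efficiency measure models like EchoNet estimate from ultrasound.
# The example volumes are illustrative, not patient data.

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """EF (%) = (EDV - ESV) / EDV * 100."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

print(f"EF: {ejection_fraction(120.0, 50.0):.1f}%")  # -> 58.3%, within the typical normal range
```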

NYU makes face shield design for healthcare workers that can be built in under a minute available to all

New York University is among the many academic, private and public institutions doing what it can to address the need for personal protective equipment (PPE) among healthcare workers across the world. The school worked quickly to develop an open-source face-shield design, and is now offering that design freely to any and all in order to help scale manufacturing to meet needs.

Face shields are a key piece of equipment for front-line healthcare workers operating in close contact with COVID-19 patients. They’re essentially plastic, transparent masks that extend fully to cover a wearer’s face. These are to be used in tandem with N95 and surgical masks, and can protect a healthcare professional from exposure to droplets containing the virus expelled by patients when they cough or sneeze.

The NYU project is one of many attempts to scale production of face shields, but many others rely on 3D printing. This has the advantage of allowing even very small commercial 3D-print operations and individuals to contribute, but 3D printing takes a lot of time — roughly 30 minutes to an hour per print. NYU’s design requires only basic materials, including two pieces of clear, flexible plastic and an elastic band, and it can be manufactured in less than a minute by essentially any production facility that includes equipment for producing flat products (hole punches, laser cutters, etc.).

The shield was designed in collaboration with clinicians, and over 100 have already been distributed to emergency rooms. NYU’s team plans to ramp production to 300,000 units once it has materials in hand at the factories of its production partners, which include Daedalus Design and Production, PRG Scenic Technologies and Showman Fabricators.

Now, the team is putting the design out there for public use, including a downloadable tool kit, so that other organizations can hopefully replicate what they’ve done and get more shields into circulation. They’re also welcoming inbound contact from manufacturers who can help add production capacity.

Other initiatives are working on different aspects of the PPE shortage, including efforts to build ventilators and extend their use to as many patients as possible. It’s a great example of what’s possible when smart people and organizations collaborate and make their efforts available to the community, and there are bound to be plenty more examples like this as the COVID-19 crisis deepens.

Prisma Health develops FDA-authorized 3D-printed device that lets a single ventilator treat four patients

The impending shortage of ventilators for U.S. hospitals is likely already a crisis, but it will become even more dire as the number of COVID-19 patients suffering severe symptoms and requiring hospitalization grows. That’s why a simple piece of hardware newly approved by the FDA for emergency use — and available free via source code and 3D printing for hospitals — might be a key ingredient in helping minimize the strain on front-line response efforts.

The Prisma Health VESper is a deceptively simple-looking three-way connector that expands use of one ventilator to treat up to four patients simultaneously. The device is made for use with ventilators that comply with existing ISO standards for ventilator hardware and tubing, and it allows the use of filtering equipment to block any possible transmission of viruses and bacteria.

VESper works in device pairs, with one attached to the intake of the ventilator, and another attached to the return. They also can be stacked to allow for treatment of up to four patients at once — provided the patients require the same clinical treatment in terms of oxygenation, including the oxygen mix as well as the air pressure and other factors.

This was devised by Dr. Sarah Farris, an emergency room doctor, who shared the concept with her husband Ryan Farris, a software engineer who developed the initial prototype design for 3D printing. Prisma Health is making the VESper available upon request via its printing specifications, but it should be noted that the emergency use authorization under which the FDA approved its use means that this is only intended effectively as a last-resort measure — for institutions where ventilators approved under established FDA rules have already been exhausted, and no other supply or alternative is available in order to preserve the life of patients.

Devices cleared under FDA Emergency Use Authorization (EUA) like this one are fully understood to be prototypes, and the conditions of their use include a duty to report the results of how they perform in practice. This data contributes to the ongoing investigation of their effectiveness, and to further development and refinement of their design in order to maximize their safety and efficacy.

In addition to offering the plans for in-house 3D printing, Prisma Health has sourced donations to help print units for healthcare facilities that don’t have access to their own 3D printers. The first batch of these will be funded by a donation from the Sargent Foundation of South Carolina, but Prisma Health is seeking additional donations to fund continued research as well as additional production.

Whatever happened to the Next Big Things?

In tech, this was the smartphone decade. In 2009, Symbian was still the dominant ‘smartphone’ OS, but 2010 saw the launch of the iPhone 4, the Samsung Galaxy S, and the Nexus One, and today Android and iOS boast four billion combined active devices. Today, smartphones and their apps are a mature market, not a disruptive new platform. So what’s next?

The question presupposes that something has to be next, that this is a law of nature. It’s easy to see why it might seem that way. Over the last thirty-plus years we’ve lived through three massive, overlapping, world-changing technology platform shifts: computers, the Internet, and smartphones. It seems inevitable that a fourth must be on the horizon.

There has certainly been no shortage of nominees over the last few years. AR/VR; blockchains; chatbots; the Internet of Things; drones; self-driving cars. (Yes, self-driving cars would be a platform, in that whole new sub-industries would erupt around them.) And yet one can’t help but notice that every single one of those has fallen far short of optimistic predictions. What is going on?

You may recall that the growth of PCs, the Internet, and smartphones did not ever look wobbly or faltering. Here’s a list of Internet users over time: from 16 million in 1995 to 147 million in 1998. Here’s a list of smartphone sales since 2009: Android went from sub-1-million units to over 80 million in just three years. That’s what a major platform shift looks like.

Let’s compare each of the above, shall we? I don’t think it’s an unfair comparison. Each has had champions arguing it will, in fact, be That Big, and even people with more measured expectations have predicted growth will at least follow the trajectory of smartphones or the Internet, albeit maybe to a lesser peak. But in fact…

AR/VR: Way back in 2015 I spoke to a very well known VC who confidently predicted a floor of 10 million devices per year well before the end of this decade. What did we get? 3.7M to 4.7M to 6M, 2017 through 2019, while Oculus keeps getting reorg’ed. A 27% annual growth rate is OK, sure, but a consistent 27% growth rate is more than a little worrying for an alleged next big thing; it’s a long, long way from “10xing in three years.” Many people also predicted that by the end of this decade Magic Leap would look like something other than an utter shambles. Welp. As for other AR/VR startups, their state is best described as “sorry.”
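
To make that growth-rate comparison explicit, here is the compound annual growth rate implied by the shipment figures quoted above, next to the “10x in three years” pace the earlier paragraphs describe for a genuine platform shift. The calculation uses only the numbers already cited.

```python
# Compound annual growth rate implied by the AR/VR headset figures quoted above,
# compared with the "10x in three years" pace of a genuine platform shift.

def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

arvr = cagr(3.7e6, 6.0e6, 2)        # headset shipments, 2017 -> 2019
platform_bar = cagr(1.0, 10.0, 3)   # 10x over three years

print(f"AR/VR headsets: {arvr:.0%} per year")        # ~27%
print(f"Platform-shift pace: {platform_bar:.0%}")    # ~115% per year
```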

Blockchains: I mean, Bitcoin’s doing just fine, sure, and is easily the weirdest and most interesting thing to have happened to tech in the 2010s; but the entire rest of the space? I’m broadly a believer in cryptocurrencies, but if you were to have suggested in mid-2017 to a true believer that, by the end of 2019, enterprise blockchains would essentially be dead, decentralized app usage would still be measured in the low thousands, and no real new use cases would have arisen other than collateralized lending for a tiny coterie — I mean, they would have been outraged. And yet, here we are.

Chatbots: No, seriously, chatbots were celebrated as the platform of the future not so long ago. (Alexa, about which more in a bit, is not a chatbot.) “The world is about to be re-written, and bots are going to be a big part of the future” was an actual quote. Facebook M was the future. It no longer exists. Microsoft’s Tay was the future. It really no longer exists. It was replaced by Zo. Did you know that? I didn’t. Zo also no longer exists.

The Internet of Things: let’s look at a few recent headlines, shall we? “Why IoT Has Consistently Fallen Short of Predictions.” “Is IoT Dead?” “IoT: Yesterday’s Predictions vs. Today’s Reality.” Spoiler: that last one does not discuss how reality has blown previous predictions out of the water. Rather, “The reality turned out to be far less rosy.”

Drones: now, a lot of really cool things are happening in the drone space, I’ll be the first to aver. But we’re a long way away from physical packet-switched networks. Amazon teased Prime Air delivery way back in 2013 and made its first drone delivery way back in 2016, which is also when it patented its blimp mother ship. People expected great things. People still expect great things. But I think it’s fair to say they expected … a bit more … by now.

Self-driving cars: We were promised so much more, and I’m not even talking about Elon Musk’s hyperbole. From 2016: “10 million self-driving cars will be on the road by 2020.” “True self-driving cars will arrive in 5 years, says Ford.” We do technically have a few, running in a closed pilot project in Phoenix, courtesy of Waymo, but that’s not what Ford was talking about: “Self-driving Fords that have no steering wheels, brake or gas pedals will be in mass production within five years.” So, 18 months from now, then. 12 months left for that “10 million” prediction. You’ll forgive a certain skepticism on my part.

The above doesn’t mean we haven’t seen any successes, of course. A lot of new kinds of products have been interesting hits: AirPods, the Apple Watch, the Amazon Echo family. All three are more new interfaces than whole new major platforms, though; not so much a gold rush as a single vein of silver.

You may notice I left machine learning / AI off the list. This is in part because it definitely has seen real qualitative leaps, but a) there seems to be a general concern that we may have entered the flattening of an S-curve there, rather than continued hypergrowth, and b) either way, it’s not a platform. Moreover, the wall that both drones and self-driving cars have hit is labelled General Purpose Autonomy … in other words, it is an AI wall. AI does many amazing things, but when people predicted 10 million self-driving cars on the road next year, they were implicitly predicting that AI would be good enough to drive them. In fact it’s getting there a lot slower than we expected.

Any one of these technologies could define the next decade. But another possibility, which we have to at least consider, is that none of them might. It is not an irrefutable law of nature that just as one major tech platform begins to mature another must inevitably start its rise. We may well see a lengthy gap before the next Next Big Thing. Then we may see two or three rise simultaneously. But if your avowed plan is that this time you’re totally going to get in on the ground floor — well, I’m here to warn you, you may have a long wait in store.

Ghost wants to retrofit your car so it can drive itself on highways in 2020

A new autonomous vehicle company is on the streets — and unbeknownst to most, has been since 2017. Unlike the majority in this burgeoning industry, this new entrant isn’t trying to launch a robotaxi service or sell a self-driving system to suppliers and automakers. It’s not aiming for autonomous delivery, either.

Ghost Locomotion, which emerged Thursday from stealth with $63.7 million in investment from Keith Rabois at Founders Fund, Vinod Khosla at Khosla Ventures and Mike Speiser at Sutter Hill Ventures, is targeting your vehicle.

Ghost is developing a kit that will allow privately owned passenger vehicles to drive autonomously on highways. And the company says it will deliver in 2020. A price has not been set, but the company says it will be less than what Tesla charges for its Autopilot package that includes “full self-driving” or FSD. FSD currently costs $7,000.

This kit isn’t going to give a vehicle a superior advanced driver assistance system. Instead, it will let human drivers hand control of their vehicle over to a computer, allowing them to do other activities, such as looking at their phone or even dozing off.

The idea might sound similar to what Comma.ai is working on, Tesla hopes to achieve or even the early business model of Cruise. Ghost CEO and co-founder John Hayes says what they’re doing is different.

A different approach

The biggest players in the industry — companies like Waymo, Cruise, Zoox and Argo AI — are trying to solve a really hard problem, which is driving in urban areas, Hayes told TechCrunch in a recent interview.

“It didn’t seem like anyone was actually trying to solve driving on the highways,” said Hayes, who previously founded Pure Storage in 2009. “At the time, we were told that this is so easy that surely the automakers will solve this any day now. And that really hasn’t happened.”

Hayes noted that automakers have continued to make progress in advanced driver assistance systems. The more advanced versions of these systems provide what the SAE describes as Level 2 automation, which means two primary control functions are automated. Tesla’s Autopilot system is a good example of this; when engaged, it automatically steers and has traffic-aware cruise control, which maintains the car’s speed in relation to surrounding traffic. But like all Level 2 systems, the driver is still in the loop.

Ghost wants to take the human out of the loop when they’re driving on highways.

“We’re taking, in some ways, a classic startup attitude to this, which is ‘what is the simplest product that we can perfect, that will put self driving in the hands of ordinary consumers?’ ” Hayes said. “And so we take people’s existing cars and we make them self-driving cars.”

The kit

Ghost is tackling that challenge with software and hardware.

The kit involves hardware like sensors and a computer that is installed in the trunk and connected to the controller area network (CAN) of the vehicle. The CAN bus is essentially the nervous system of the car and allows various parts to communicate with each other.

Vehicles must have a CAN bus and electronic steering to be able to use the kit.
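
For a sense of what that integration layer involves, here is a minimal, hypothetical sketch of reading from and writing to a vehicle’s CAN bus with the open-source python-can library. The channel name, arbitration IDs and payload bytes are invented for illustration; real message definitions are proprietary and differ by make and model, and nothing here reflects Ghost’s actual kit.

```python
import can  # python-can; pip install python-can

# Hypothetical sketch of CAN-bus integration of the kind a retrofit kit needs:
# listen for steering-angle reports and issue a steering command.
# Arbitration IDs and payloads are made up; real definitions are model-specific.

STEERING_ANGLE_ID = 0x25  # hypothetical ID for a steering-angle report
STEERING_CMD_ID = 0x2E    # hypothetical ID for an electronic steering command

def main():
    bus = can.interface.Bus(channel="can0", bustype="socketcan")

    # Watch a handful of frames for steering-angle reports.
    for _ in range(10):
        msg = bus.recv(timeout=1.0)
        if msg is not None and msg.arbitration_id == STEERING_ANGLE_ID:
            print("steering frame:", msg.data.hex())

    # Send a (hypothetical) steering command frame.
    cmd = can.Message(arbitration_id=STEERING_CMD_ID,
                      data=[0x00, 0x10, 0x00, 0x00],
                      is_extended_id=False)
    bus.send(cmd)
    bus.shutdown()

if __name__ == "__main__":
    main()
```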

The camera sensors are distributed throughout the vehicle. Cameras are integrated into what looks like a license plate holder at the back of the vehicle; another set is embedded behind the rearview mirror.

A third device with cameras is attached to the frame around the door window.

Initially, this kit will be an aftermarket product; the company is starting with the 20 most popular car brands and will expand from there.

Ghost intends to set up retail spaces where a car owner can see the product and have it installed. But eventually, Hayes said, he believes the kit will become part of the vehicle itself, much like GPS or satellite radio has evolved.

While hardware is the most visible piece of Ghost, the company’s 75 employees have dedicated much of their time to the driving algorithm. It’s here, Hayes says, where Ghost stands apart.

How Ghost is building a driver

Ghost is not testing its self-driving system on public roads, unlike nearly every other AV company. There are 63 companies in California that have received permits from the Department of Motor Vehicles to test autonomous vehicle technology (always with a human safety driver behind the wheel) on public roads.

Ghost’s entire approach is based on an axiom that the human driver is fundamentally correct. It begins by collecting massive amounts of video data from kits that are installed on the cars of high-mileage drivers. Ghost then uses models to figure out what’s going on in the scene and combines that with other data, including how the person is driving, by measuring the actions they take.

It doesn’t take long or much data to model ordinary driving: actions like staying in a lane, braking and changing lanes on a highway. But that doesn’t “solve” self-driving on highways, because the hard part is building a driver that can handle odd occurrences, such as swerving, and correct for those bad behaviors.

Ghost’s system uses machine learning to find more interesting scenarios in the reams of data it collects and builds training models based on them.
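
The pipeline described here, recording what drivers see and what they then do, and training a model on those pairs, is essentially imitation learning. Below is a generic behavioral-cloning sketch in PyTorch under assumed input shapes; it illustrates the technique in general, not Ghost’s actual model.

```python
import torch
import torch.nn as nn

# Generic behavioral-cloning sketch: predict the human driver's steering and
# brake/throttle from per-frame camera features, treating the human as the
# supervision signal. Shapes and data below are stand-ins, not Ghost's system.

class DrivingPolicy(nn.Module):
    def __init__(self, feature_dim: int = 512):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(),
            nn.Linear(128, 2),  # outputs: steering angle, brake/throttle
        )

    def forward(self, features):
        return self.head(features)

policy = DrivingPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# One training step on a fake batch: camera features paired with the actions
# the human driver actually took at those moments.
features = torch.randn(32, 512)      # stand-in for per-frame camera features
human_actions = torch.randn(32, 2)   # stand-in for logged steering/brake values

loss = nn.functional.mse_loss(policy(features), human_actions)
loss.backward()
optimizer.step()
print(f"imitation loss: {loss.item():.3f}")
```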

The company’s kits are already installed on the cars of high-mileage drivers like Uber and Lyft drivers and commuters. Ghost has recruited dozens of drivers and plans to have its kits in hundreds of cars by the end of the year. By next year, Hayes says the kits will be in thousands of cars, all for the purpose of collecting data.

The background of the executive team, including co-founder and CTO Volkmar Uhlig, as well as the rest of their employees, provides some hints as to how they’re approaching the software and its integration with hardware.

Its employees are data scientists and engineers, not roboticists. A dive into their resumes on LinkedIn shows that not one comes from another autonomous vehicle company, which is unusual in this era of talent poaching.

For instance, Uhlig, who started his career at IBM Watson Research, co-founded Adello and was the architect behind the company’s programmatic media trading platform. Before that, he built Teza Technologies, a high-frequency trading platform. While earning his PhD in computer science he was part of a team that architected the L4 Pistachio microkernel, which is commercially deployed in more than 3 billion mobile Apple and Android devices.

If Ghost is able to validate its system — which Hayes says is baked into its entire approach — privately owned self-driving cars could be on the highways by next year. While the National Highway Traffic Safety Administration could potentially step in, Ghost’s approach, like Tesla’s, hits a sweet spot of non-regulation. It’s a space that, Hayes notes, the government has not yet chosen to regulate.

Climate activists plan to use drones to shut down Heathrow Airport next month

A UK group of climate activists is planning to fly drones close to Heathrow Airport next month in a direct action they hope will shut down the country’s largest airport for days or even longer.

The planned action is in protest at the government’s decision to green-light a third runway at Heathrow.

They plan to use small, lightweight “toy” drones, flown at head height (6ft) within the 5km drone ‘no fly’ zone around the airport — but not within flight paths. The illegal drone flights will also be made in the early morning, at a time when there would not be any scheduled flights in the airspace, to avoid any risk of posing a threat to aircraft.
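
Geometrically, the 5km restriction is just a radius check around the airfield. Here is a minimal, illustrative geofence test using approximate coordinates for Heathrow; it is not any official implementation.

```python
import math

# Illustrative geofence check: is a position inside the 5 km restricted zone
# around an airport? The Heathrow coordinates below are approximate.

HEATHROW = (51.4700, -0.4543)  # latitude, longitude (approximate)
ZONE_RADIUS_KM = 5.0

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def inside_no_fly_zone(position):
    return haversine_km(position, HEATHROW) <= ZONE_RADIUS_KM

print(inside_no_fly_zone((51.48, -0.44)))  # a point roughly 1.5 km away -> True
```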

The activists point out that the government recently declared a climate emergency — when it also pledged to reduce carbon emissions to net zero by 2050 — arguing there is no chance of meeting that target if the UK expands current airport capacity.

A press spokesman for the group, which is calling itself Heathrow Pause, told TechCrunch: “Over a thousand children are dying as a result of climate change and ecological collapse — already, every single day. That figure is set to significantly worsen. The government has committed to not just reducing carbon emissions but reducing them to net zero — that is clearly empirically impossible if they build another runway.”

The type of drones they plan to use for the protest are budget models which they say can be bought cheaply at UK retailer Argos — which, for example, sells the Sky Viper Stunt Drone for £30; the Revell GO! Stunt Quadcopter Drone for £40; and the Revell Spot 2.0 Quadcopter (which comes with a HD camera) for £50.

The aim for the protest is to exploit what the group dubs a loophole in Heathrow’s health and safety protocol around nearby drone flights to force it to close down runways and ground flights.

Late last year a spate of drone sightings near the UK’s second busiest airport, Gatwick, led to massive disruption for travellers just before Christmas after the airport responded by grounding flights.

At the time, the government was sharply criticized for having failed to foresee weaknesses in the regulatory framework around drone flights near sensitive sites like airports.

In the following months it responded by beefing up what was then a 1km airport exclusion zone to 5km — with that expanded ‘no fly’ zone coming into force in March. However a wider government plan to table a comprehensive drones bill has faced a number of delays.

It’s the larger 5km ‘no fly’ zone that the Heathrow Pause activists are targeting in a way they hope will safely trigger the airport’s health & safety protocol and shut down the airspace and business as usual.

Whether the strategy to use drones as a protest tool to force the closure of the UK’s largest airport will fly remains to be seen.

A spokeswoman for Heathrow airport told us it’s confident it has “robust plans” in place to ensure the group’s protest does not result in any disruption to flights. However she would not provide any details on the steps it will take to avoid having to close runways and ground flights, per its safety protocol.

When we put the airport’s claim of zero disruption from intended action back to Heathrow Pause, its spokesman told us: “Our understanding is that the airport’s own health and safety protocols dictate that they have to ground airplanes if there are any drones of any size flying at any height anywhere within 5km of the airport.

“Our position would be that it’s entirely up to them what they do. That the action that we’re taking does not pose a threat to anybody and that’s very deliberately the case. Having said that I’d be surprised to hear that they’re going to disregard their own protocols even if those are — in our view — excessive. It would still come as a surprise if they weren’t going to follow them.”

“We won’t be grounding any flights in any circumstances,” he added. “It’s not within our power to do so. All of the actions that have been planned have been meticulously planned so as not to pose any threat to anybody. We don’t actually see that there need to be flights grounded either. Having said that clearly it would be great if Heathrow decided to ground flights. Every flight that’s grounded is that much less greenhouse gas pumped into the atmosphere. And it directly saves lives.

“The fewer flights there are the better. But if there are no flights cancelled we’d still consider the action to be an enormous success — purely upon the basis of people being arrested.”

The current plan for the protest is to start illegally flying drones near Heathrow on September 13 — and continue for what the spokesman said could be as long as “weeks”, depending on how many volunteer pilots it can sign up. He says they “anticipate” having between 50 and 200 people willing to risk arrest by breaching drone flight law.

The intention is to keep flying drones for as long as people are willing to join the protest. “We are hoping to go for over a week,” he told us.

Given the plan has been directly communicated to police the spokesman conceded there is a possibility that the activists could face arrest before they are able to carry out the protest — which he suggested might be what Heathrow is banking on.

Anyone who flies a drone in an airport’s ‘no fly’ zone is certainly risking arrest and prosecution under UK law. Penalties for the offence range from fines to life imprisonment if a drone is intentionally used to cause violence. But the group is clearly taking pains to avoid accusations the protest poses a safety risk or threatens violence — including by publishing extensive details of their plan online, as well as communicating it to police and airport authorities.

A detailed protocol on their website sets out the various safety measures and conditions the activists are attaching to the drone action — “to ensure no living being is harmed” — such as only using drones lighter than 7kg and giving the airport an hour’s notice ahead of each drone flight.

They also say they have a protocol to shut down the protest in the event of an emergency — and will have a dedicated line of communication open to Heathrow for this purpose.

Some of the activists are scheduled to meet with police and airport authorities tomorrow, face to face, at a London police station to discuss the planned action.

The group says it will only call off the action if the Heathrow third runway expansion is cancelled.

In an emailed statement in response to the protest, Heathrow Airport told us:

We agree with the need to act on climate change. This is a global issue that requires constructive engagement and action. Committing criminal offences and disrupting passengers is counterproductive.

Flying of any form of drone near Heathrow is illegal and any persons found doing so will be subject to the full force of the law. We are working closely with the Met Police and will use our own drone detection capability to mitigate the operational impact of any illegal use of drones near the airport.

Asked why the environmental activists have selected drones as their tool of choice for this protest, rather than deploying more traditional peaceful direct action strategies, such as trespassing on airport grounds or chaining themselves to fixed infrastructure, the Heathrow Pause spokesman told us: “Those kind of actions have been done in the past and they tend to result in very short duration of time during which very few flights are cancelled. What we are seeking to do is unprecedented in terms of the duration and the extent of the disruption that we would hope to cause.

“The reason for drones is in order to exploit this loophole in the health and safety protocols that have been presented to us — that it’s possible for a person with a toy drone that you can purchase for a couple of quid, miles away from any planes, to cause an entire airport to stop having flights. It is quite an amazing situation — and once it became apparent that that was really a possibility it almost seemed criminal not to do it.”

He added that drone technology, and the current law in the UK around how drones can be legally used, present an opportunity for activists to level up their environmental protest — “to cause so much disruption with so few people and so little effort” — that it’s simply “a no brainer”.

During last year’s Gatwick drone debacle the spokesman said he received many enquiries from journalists asking if the group was responsible for that. They weren’t — but the mass chaos caused by the spectre of a few drones being flown near Gatwick provided inspiration for using drone technology for an environmental protest.

The group’s website is hosting video interviews with some of the volunteer drone pilots who are willing to risk arrest to protest against the expansion of Heathrow Airport on environmental grounds.

In a statement there, one of them, a 64-year-old writer called Valerie Milner-Brown, said: “We are in the middle of a climate and ecological emergency. I am a law-abiding citizen — a mother and a grandmother too. I don’t want to break the law, I don’t want to go to prison, but right now we, as a species, are walking off the edge of a cliff. Life on Earth is dying. Fires are ravaging the Amazon. Our planet’s lungs are quite literally on fire. Hundreds of species are going extinct every day. We are experiencing hottest day after hottest day, and the Arctic is melting faster than scientists’ worst predictions.

“All of this means that we have to cut emissions right now, or face widespread catastrophe on an increasingly uninhabitable planet. Heathrow Airport emits 18 million tons of CO2 a year. That’s more than most countries. A third runway will produce a further 7.3 million tons of CO2. For all Life — now and in the future — we have to take action. I’m terrified but if this is what it will take to make politicians, business leaders and the media wake up, then I’m prepared to take this action and to face the consequences.”

Waymo has now driven 10 billion autonomous miles in simulation

Alphabet’s Waymo autonomous driving company announced a new milestone at TechCrunch Sessions: Mobility on Wednesday: 10 billion miles driven in simulation. This is a significant achievement for the company, because all those simulated miles on the road for its self-driving software add up to considerable training experience.

Waymo also probably has the most experience when it comes to actual, physical road miles driven — the company is always quick to point out that it’s been doing this far longer than just about anyone else working in autonomous driving, thanks to its head start as Google’s self-driving car moonshot project.

“At Waymo, we’ve driven more than 10 million miles in the real world, and over 10 billion miles in simulation,” Waymo CTO Dmitri Dolgov told TechCrunch’s Kirsten Korosec on the Sessions: Mobility stage. “And the amount of driving you do in both of those is really a function of the maturity of your system, and the capability of your system. If you’re just getting started, it doesn’t matter – you’re working on the basics, you can drive a few miles or a few thousand or tens of thousands of miles in the real world, and that’s plenty to tell you and give you information that you need to know to improve your system.”

Dolgov’s point is that the more advanced your autonomous driving system becomes, the more miles you actually need to drive to have an impact, because you’ve handled the basics and are moving on to edge cases, advanced navigation and ensuring that the software works in any and every scenario it encounters. Plus, your simulation becomes more sophisticated and more accurate as you accumulate real-world driving miles, which means the results of your virtual testing are more reliable for use back in your cars driving on actual roads.

This is what leads Dolgov to the conclusion that Waymo’s simulation is likely better than a lot of comparable simulation training at other autonomous driving companies.

“I think what makes it a good simulator, and what makes it powerful is two things,” Dolgov said onstage. “One [is] fidelity. And by fidelity, I mean, not how good it looks. It’s how well it behaves, and how representative it is of what you will encounter in the real world. And then second is scale.”

In other words, experience isn’t just a matter of volume — it’s about sophistication, maturity and readiness for commercial deployment.
