machine learning

Daily Crunch: Twitter rolls out audio tweets

Posted by | Android, app-store, Apple, apple inc, artificial intelligence, ceo, Daily Crunch, iTunes, machine learning, operating systems, Rahul Vohra, Social, Software, Stockwell, TechCrunch, text messaging, Twitter, United Kingdom | No Comments

Twitter tries to make audio tweets a thing, the U.K. backtracks on its contact-tracing app and Apple’s App Store revenue share is at the center of a new controversy.

Here’s your Daily Crunch for June 18, 2020.

1. Twitter begins rolling out audio tweets on iOS

Twitter is rolling out audio tweets, which do exactly what you’d expect — allow users to share thoughts in audio form. The feature will only be available to some iOS users for now, though the company says all iOS users should have access “in the coming weeks.” (No word on an Android or web rollout yet.)

This feature potentially allows for much longer thoughts than a 280-character tweet. Individual audio clips will be limited to 140 seconds, but if you exceed the limit, a new tweet will be threaded beneath the original.

2. UK gives up on centralized coronavirus contacts-tracing app — switches to testing model backed by Apple and Google

The U.K.’s move to abandon the centralized approach and adopt a decentralized model is hardly surprising, but the time it’s taken the government to arrive at the obvious conclusion does raise some major questions over its competence at handling technology projects.

3. Apple doubles down on its right to profit from other businesses

Apple this week is getting publicly dragged for digging in its heels over its right to take a cut of subscription-based transactions that flow through its App Store. This is not a new complaint, but one that came to a head this week over Apple’s decision to reject app updates from Basecamp’s newly launched subscription-based email app called Hey.

4. Payfone raises $100M for its mobile phone-based digital verification and ID platform

Payfone has built a platform to identify and verify people using data (but not personal data) gleaned from your mobile phone. CEO Rodger Desai said the plan for the funding is to build more machine learning into the company’s algorithms, expand to 35 more geographies and to make strategic acquisitions to expand its technology stack.

5. Superhuman’s Rahul Vohra says recession is the ‘perfect time’ to be aggressive for well-capitalized startups

We had an extensive conversation with Vohra as part of Extra Crunch Live, also covering why the email app still has more than 275,000 people on its wait list. (Extra Crunch membership required.)

6. Stockwell, the AI-vending machine startup formerly known as Bodega, is shutting down July 1

Founded in 2017 by ex-Googlers, the AI vending machine startup formerly known as Bodega first raised blood pressures — people hated how it both referenced and threatened to “disrupt” mom-and-pop shops in one fell swoop — and then raised a lot of money. But ultimately, it was no match for COVID-19 and how it reshaped our lifestyles.

7. Apply for the Startup Battlefield

With TechCrunch Disrupt going virtual, this is your chance to get featured in front of our largest audience ever. The post says you’ve only got 72 hours left, but the clock has been ticking since then — the deadline is 11:59pm Pacific tomorrow, June 19. So get on it!

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.


TinyML is giving hardware new life

Posted by | arduino, artificial intelligence, artificial neural networks, biotech, Cloud, Column, coronavirus, COVID-19, deep learning, drug development, embedded systems, Extra Crunch, Gadgets, hardware, machine learning, manufacturing, Market Analysis, ML, neural networks, Open source hardware, robotics, SaaS, Wearables | No Comments
Adam Benzion
Contributor

A serial entrepreneur, writer, and tech investor, Adam Benzion is the co-founder of Hackster.io, the world’s largest community for hardware developers.

Aluminum and iconography are no longer enough for a product to get noticed in the marketplace. Today, great products need to be useful and deliver an almost magical experience, something that becomes an extension of life. Tiny Machine Learning (TinyML) is the latest embedded software technology that moves hardware into that almost magical realm, where machines can automatically learn and grow through use, like a primitive human brain.

Until now, building machine learning (ML) algorithms for hardware meant complex mathematical models based on sample data, known as “training data,” in order to make predictions or decisions without being explicitly programmed to do so. And if this sounds complex and expensive to build, it is. On top of that, ML-related tasks were traditionally offloaded to the cloud, creating latency, consuming scarce power and putting machines at the mercy of connection speeds. Combined, these constraints made computing at the edge slower, more expensive and less predictable.

But thanks to recent advances, companies are turning to TinyML as the latest trend in building product intelligence. Arduino, the company best known for open-source hardware, is making TinyML available to millions of developers. Together with Edge Impulse, it is turning ubiquitous Arduino boards, like the Arduino Nano 33 BLE Sense and other 32-bit boards, into powerful embedded ML platforms. With this partnership you can run powerful learning models based on artificial neural networks (ANN) that sample tiny sensors and run on low-powered microcontrollers.

Over the past year great strides were made in making deep learning models smaller, faster and runnable on embedded hardware through projects like TensorFlow Lite for Microcontrollers, uTensor and Arm’s CMSIS-NN. But building a quality dataset, extracting the right features, and training and deploying these models is still complicated. TinyML is the missing link between edge hardware and device intelligence, and it is now coming to fruition.
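To make that concrete, here is a minimal Python sketch of shrinking a small model with TensorFlow’s post-training quantization, one of the techniques the TensorFlow Lite tooling supports for getting models onto microcontroller-class hardware. The toy model, its input shape and the calibration data are placeholder assumptions, not anything from the Arduino or Edge Impulse toolchain.

```python
# Minimal sketch: shrink a toy Keras model with post-training quantization
# so the resulting flat buffer is small enough for a microcontroller-class
# device. The model, shapes and calibration data are illustrative only.
import numpy as np
import tensorflow as tf

# A toy network standing in for a real sensor-classification model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),             # e.g. a window of sensor readings
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. three gesture classes
])

def representative_data():
    # Calibration samples let the converter estimate activation ranges
    # for integer quantization.
    for _ in range(100):
        yield [np.random.rand(1, 128).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_model = converter.convert()

# The resulting buffer is typically kilobytes in size and would be compiled
# into firmware (e.g. as a C array) for an on-device interpreter to run.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model)} bytes")
```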

Tiny devices with not-so-tiny brains


Scandit raises $80M as COVID-19 drives demand for contactless deliveries

Posted by | 7-eleven, Alaska airlines, arkansas, barcode, Carrefour, Enterprise, Europe, fedex, Fundings & Exits, G2VP, hardware, healthcare, Instacart, inventory management, latin america, machine learning, Mobile, NGP capital, north america, ocr, Salesforce Ventures, Samuel Mueller, scandit, smartphones, swisscom ventures, Toyota, Wearables, zurich | No Comments

Enterprise barcode scanner company Scandit has closed an $80 million Series C round, led by Silicon Valley VC firm G2VP. Atomico, GV, Kreos, NGP Capital, Salesforce Ventures and Swisscom Ventures also participated in the round — which brings its total raised to date to $123M.

The Zurich-based firm offers a platform that combines computer vision and machine learning tech with barcode scanning, text recognition (OCR), object recognition and augmented reality, and is designed for any camera-equipped smart device — from smartphones to drones, wearables (e.g. AR glasses for warehouse workers) and even robots.

Use-cases include mobile apps or websites for mobile shopping; self checkout; inventory management; proof of delivery; asset tracking and maintenance — including in healthcare where its tech can be used to power the scanning of patient IDs, samples, medication and supplies.

It bills its software as “unmatched” in terms of speed and accuracy, as well as the ability to scan in bad light; at any angle; and with damaged labels. Target industries include retail, healthcare, industrial/manufacturing, travel, transport & logistics and more.

The latest funding injection follows a $30M Series B round back in 2018. Since then, Scandit says, it has tripled recurring revenues, more than doubled the number of blue-chip enterprise customers and doubled the size of its global team.

Global customers for its tech include the likes of 7-Eleven, Alaska Airlines, Carrefour, DPD, FedEx, Instacart, Johns Hopkins Hospital, La Poste, Levi Strauss & Co, Mount Sinai Hospital and Toyota — with the company touting “tens of billions of scans” per year on 100+ million active devices at this stage of its business.

It says the new funding will go on further pressing on the gas to grow in new markets, including APAC and Latin America, as well as building out its footprint and ops in North America and Europe. Also on the slate: Funding more R&D to devise new ways for enterprises to transform their core business processes using computer vision and AR.

The need for social distancing during the coronavirus pandemic has also accelerated demand for mobile computer vision on personal smart devices, according to Scandit, which says customers are looking for ways to enable more contactless interactions.

Another demand spike it’s seeing is coming from the pandemic-related boom in ‘Click & Collect’ retail and “millions” of extra home deliveries — something its tech is well positioned to cater to because its scanning apps support BYOD (bring your own device), rather than requiring proprietary hardware.

“COVID-19 has shone a spotlight on the need for rapid digital transformation in these uncertain times, and the need to blend the physical and digital plays a crucial role,” said CEO Samuel Mueller in a statement. “Our new funding makes it possible for us to help even more enterprises to quickly adapt to the new demand for ‘contactless business’, and be better positioned to succeed, whatever the new normal is.”

Also commenting on the funding in a supporting statement, Ben Kortlang, general partner at G2VP, added: “Scandit’s platform puts an enterprise-grade scanning solution in the pocket of every employee and customer without requiring legacy hardware. This bridge between the physical and digital worlds will be increasingly critical as the world accelerates its shift to online purchasing and delivery, distributed supply chains and cashierless retail.”


7 VCs talk about today’s esports opportunities

Posted by | Advertising Tech, artificial intelligence, coronavirus, COVID-19, Entertainment, esports, Extra Crunch, Gaming, Investor Surveys, machine learning, Startups, TC, Venture Capital | No Comments

Even before the COVID-19 shutdown, venture funding rounds and total deal volume of VC funding for esports were down noticeably from the year prior. The space received a lot of attention in 2017 and 2018 as leagues formed, teams raised money and surging popularity fostered a whole ecosystem of new companies. Last year featured some big fundraises, but esports wasn’t the hot new thing in the tech world anymore.

This unexpected, compulsory work-from-home era may drive renewed interest in the space, however, as a larger market of consumers discover esports and more potential entrepreneurs identify pain points in their experience.

To track where new startups could arise this year, I asked seven VCs who pay close attention to the esports market where they see opportunities at the moment:

Their responses are below.

This is the second investor survey I’ve conducted to better understand VCs’ views on gaming startups amid the pandemic; they complement my broader gaming survey from October 2019 and an eight-article series on virtual worlds I wrote last month. If you missed it, read the previous survey, which investigated the trend of “games as the new social networks”.

Peter Levin, Griffin Gaming Partners

Which specific areas within esports are most interesting to you right now as a VC looking for deals? Which areas are the least interesting territory for new deals?

Everything around competitive gaming is of interest to us. With Twitch streaming north of two BILLION hours of game play thus far during the pandemic, this continues to be an area of great interest to us. Fantasy, real-time wagering, match-making, backend infrastructure and other areas of ‘picks and shovels’-like plays remain front burner for us relative to competitive gaming.

What challenges does the esports ecosystem now need solutions to that didn’t exist (or weren’t a focus) 2 years ago?

As competitive gaming is still so very new with respect to the greater competitive landscape of content, teams and events, the industry should be nimble enough to better respond to dramatic market shifts relative to its analog, linear brethren. A native digital industry, getting back “online” will be orders of magnitude more straightforward than in so many other areas.


R&D Roundup: Sweat power, Earth imaging, testing ‘ghostdrivers’

Posted by | artificial intelligence, autonomous systems, coronavirus, COVID-19, cybernetics, esa, Extra Crunch, Gadgets, Health, imaging, Lidar, machine learning, MIT, National Science Foundation, plastics, satellite imagery, science, self-driving car, Space, TC, technology, telemedicine | No Comments

I see far more research articles than I could possibly write up. This column collects the most interesting of those papers and advances, along with notes on why they may prove important in the world of tech and startups.

This week: one step closer to self-powered on-skin electronics; people dressed as car seats; how to make a search engine for 3D data; and a trio of Earth imaging projects that take on three different types of disasters.

Sweat as biofuel

Monitoring vital signs is a crucial part of healthcare and is a big business across fitness, remote medicine and other industries. Unfortunately, powering devices that are low-profile and last a long time without a bulky battery or frequent charging is a fundamental challenge. Wearables powered by body movement or other bio-derived sources are an area of much research, and this sweat-powered wireless patch is a major advance.

A figure from the paper showing the device and interactions happening inside it.

The device, described in Science Robotics, uses perspiration as both fuel and sampling material; sweat contains chemical signals that can indicate stress, medication uptake, and so on, as well as lactic acid, which can be used in power-generating reactions.

The patch performs this work on a flexible substrate and uses the generated power to transmit its data wirelessly. It’s reliable enough that it was used to control a prosthesis, albeit in limited fashion. The market for devices like this will be enormous and this platform demonstrates a new and interesting direction for researchers to take.


Apple and CMU researchers demo a low friction learn-by-listening system for smarter home devices

Posted by | Apple, artificial intelligence, cmu, Gadgets, machine learning, neural network, smart device, smart devices, smart speaker, supervised learning | No Comments

A team of researchers from Apple and Carnegie Mellon University’s Human-Computer Interaction Institute has presented a system for embedded AIs to learn by listening to noises in their environment, without the need for up-front training data or placing a huge burden on the user to supervise the learning process. The overarching goal is for smart devices to more easily build up contextual/situational awareness to increase their utility.

The system, which they’ve called Listen Learner, relies on acoustic activity recognition to enable a smart device, such as a microphone-equipped speaker, to interpret events taking place in its environment via a process of self-supervised learning, with manual labelling done by one-shot user interactions — such as the speaker asking a person ‘what was that sound?’ after it has heard the noise enough times to classify it into a cluster.

A general pre-trained model can also be looped in to enable the system to make an initial guess on what an acoustic cluster might signify. So the user interaction could be less open-ended, with the system able to pose a question such as ‘was that a faucet?’ — requiring only a yes/no response from the human in the room.

Refinement questions could also be deployed to help the system figure out what the researchers dub “edge cases”, i.e. where sounds have been closely clustered yet might still signify a distinct event — say a door being closed vs a cupboard being closed. Over time, the system might be able to make an educated either/or guess and then present that to the user to confirm.

They’ve put together the below video demoing the concept in a kitchen environment.

In their paper presenting the research they point out that while smart devices are becoming more prevalent in homes and offices they tend to lack “contextual sensing capabilities” — with only “minimal understanding of what is happening around them”, which in turn limits “their potential to enable truly assistive computational experiences”.

And while acoustic activity recognition is not itself new, the researchers wanted to see if they could improve on existing deployments which either require a lot of manual user training to yield high accuracy; or use pre-trained general classifiers to work ‘out of the box’ but — since they lack data for a user’s specific environment — are prone to low accuracy.

Listen Learner is thus intended as a middle ground to increase utility (accuracy) without placing a high burden on the human to structure the data. The end-to-end system automatically generates acoustic event classifiers over time, with the team building a proof-of-concept prototype device to act like a smart speaker and pipe up to ask for human input. 

“The algorithm learns an ensemble model by iteratively clustering unknown samples, and then training classifiers on the resulting cluster assignments,” they explain in the paper. “This allows for a ‘one-shot’ interaction with the user to label portions of the ensemble model when they are activated.”

Audio events are segmented using an adaptive threshold that triggers when the microphone input level is 1.5 standard deviations higher than the mean of the past minute.

“We employ hysteresis techniques (i.e., for debouncing) to further smooth our thresholding scheme,” they add, further noting that: “While many environments have persistent and characteristic background sounds (e.g., HVAC), we ignore them (along with silence) for computational efficiency. Note that incoming samples were discarded if they were too similar to ambient noise, but silence within a segmented window is not removed.”
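As a rough illustration of that kind of adaptive thresholding (a sketch, not the authors’ code), the snippet below flags frames whose level rises 1.5 standard deviations above the trailing one-minute mean, with a short hold-off standing in for the debouncing step. The frame rate, hold-off length and test signal are assumed values.

```python
# Rough sketch of adaptive-threshold event segmentation: trigger when the
# input level exceeds the trailing one-minute mean by 1.5 standard
# deviations, with a short hold-off as a stand-in for debouncing.
from collections import deque
import numpy as np

FRAME_RATE = 10            # assumed analysis frames per second
WINDOW = 60 * FRAME_RATE   # trailing one-minute window of level readings
HOLDOFF_FRAMES = 5         # ignore new triggers for ~0.5 s after one fires

def segment_events(levels):
    """Yield indices of frames where an acoustic event starts."""
    history = deque(maxlen=WINDOW)
    holdoff = 0
    for i, level in enumerate(levels):
        if len(history) == WINDOW:
            mean, std = np.mean(history), np.std(history)
            if level > mean + 1.5 * std and holdoff == 0:
                yield i
                holdoff = HOLDOFF_FRAMES
        holdoff = max(0, holdoff - 1)
        history.append(level)

# Example: a quiet signal with two loud bursts after the first minute.
rng = np.random.default_rng(0)
levels = rng.normal(0.1, 0.01, 2 * WINDOW)
levels[WINDOW + 50] = 1.0
levels[WINDOW + 300] = 1.2
print(list(segment_events(levels)))   # -> [650, 900]
```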

The CNN (convolutional neural network) audio model they’re using was initially trained on the YouTube-8M dataset  — augmented with a library of professional sound effects, per the paper.

“The choice of using deep neural network embeddings, which can be seen as learned low-dimensional representations of input data, is consistent with the manifold assumption (i.e., that high-dimensional data roughly lie on a low-dimensional manifold). By performing clustering and classification on this low-dimensional learned representation, our system is able to more easily discover and recognize novel sound classes,” they add.

The team used unsupervised clustering methods to infer the location of class boundaries from the low-dimensional learned representations — using a hierarchical agglomerative clustering (HAC) algorithm known as Ward’s method.

Their system evaluates “all possible groupings of data to find the best representation of classes”, given candidate clusters may overlap with one another.

“While our clustering algorithm separates data into clusters by minimizing the total within-cluster variance, we also seek to evaluate clusters based on their classifiability. Following the clustering stage, we use an unsupervised one-class support vector machine (SVM) algorithm that learns decision boundaries for novelty detection. For each candidate cluster, a one-class SVM is trained on a cluster’s data points, and its F1 score is computed with all samples in the data pool,” they add.

“Traditional clustering algorithms seek to describe input data by providing a cluster assignment, but this alone cannot be used to discriminate unseen samples. Thus, to facilitate our system’s inference capability, we construct an ensemble model using the one-class SVMs generated from the previous step. We adopt an iterative procedure for building our ensemble model by selecting the first classifier with an F1 score exceeding the threshold θ and adding it to the ensemble. When a classifier is added, we run it on the data pool and mark samples that are recognized. We then restart the cluster-classify loop until either 1) all samples in the pool are marked or 2) a loop does not produce any more classifiers.”
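For a more concrete picture of that cluster-classify loop, here is a loose Python sketch built on scikit-learn’s Ward-linkage agglomerative clustering and one-class SVMs scored by F1 against the data pool. It follows the structure described in the quote above but is not the authors’ implementation; the synthetic embeddings, the F1 threshold and the fixed candidate cluster count are assumptions made for illustration.

```python
# Loose sketch of the cluster-then-classify loop: cluster low-dimensional
# audio embeddings with Ward linkage, fit a one-class SVM per candidate
# cluster, and keep classifiers whose F1 score on the pool clears a
# threshold. The embeddings, threshold and cluster count are illustrative.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import f1_score
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Stand-in for learned audio embeddings: three synthetic sound classes.
pool = np.vstack([rng.normal(loc, 0.3, size=(40, 8)) for loc in (0.0, 3.0, -3.0)])

F1_THRESHOLD = 0.8        # assumed acceptance threshold (theta in the paper)
ensemble = []
unmarked = np.ones(len(pool), dtype=bool)

while unmarked.any():
    data = pool[unmarked]
    labels = AgglomerativeClustering(
        n_clusters=min(3, len(data)), linkage="ward").fit_predict(data)

    added = False
    for c in np.unique(labels):
        svm = OneClassSVM(gamma="scale", nu=0.1).fit(data[labels == c])
        # Does this candidate cleanly pick out "its" samples from the pool?
        pred = svm.predict(pool) == 1
        truth = np.zeros(len(pool), dtype=bool)
        truth[np.where(unmarked)[0][labels == c]] = True
        if f1_score(truth, pred) >= F1_THRESHOLD:
            ensemble.append(svm)
            unmarked &= ~pred   # mark samples this classifier recognizes
            added = True
            break               # restart the cluster-classify loop
    if not added:
        break                   # no acceptable classifier left: stop

print(f"Classifiers in ensemble: {len(ensemble)}, samples left unmarked: {unmarked.sum()}")
```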

Privacy preservation?

The paper touches on privacy concerns that arise from such a listening system — given how often the microphone would be switched on and processing environmental data, and because they note it may not always be possible to carry out all processing locally on the device.

“While our acoustic approach to activity recognition affords benefits such as improved classification accuracy and incremental learning capabilities, the capture and transmission of audio data, especially spoken content, should raise privacy concerns,” they write. “In an ideal implementation, all data would be retained on the sensing device (though significant compute would be required for local training). Alternatively, compute could occur in the cloud with user-anonymized labels of model classes stored locally.”

You can read the full paper here.


WorldGaze uses smartphone cameras to help voice AIs cut to the chase

Posted by | apple inc, artificial intelligence, Assistant, augmented reality, carnegie mellon university, Chris Harrison, Computer Vision, Emerging-Technologies, iPhone, machine learning, Magic Leap, Mobile, siri, smartphone, smartphones, virtual assistant, voice AI, Wearables, WorldGaze | No Comments

If you find voice assistants frustratingly dumb, you’re hardly alone. The much-hyped promise of AI-driven vocal convenience very quickly falls through the cracks of robotic pedantry.

A smart AI that has to come back again (and sometimes again) to ask for extra input to execute your request can seem especially dumb — when, for example, it doesn’t get that the most likely repair shop you’re asking about is not any one of them but the one you’re parked outside of right now.

Researchers at the Human-Computer Interaction Institute at Carnegie Mellon University, working with Gierad Laput, a machine learning engineer at Apple, have devised a demo software add-on for voice assistants that lets smartphone users boost the savvy of an on-device AI by giving it a helping hand — or rather a helping head.

The prototype system makes simultaneous use of a smartphone’s front and rear cameras to be able to locate the user’s head in physical space, and more specifically within the immediate surroundings — which are parsed to identify objects in the vicinity using computer vision technology.

The user is then able to use their head as a pointer to direct their gaze at whatever they’re talking about — i.e. “that garage” — wordlessly filling in contextual gaps in the AI’s understanding in a way the researchers contend is more natural.

So, instead of needing to talk like a robot in order to tap the utility of a voice AI, you can sound a bit more, well, human. Asking stuff like “Siri, when does that Starbucks close?” Or — in a retail setting — “are there other color options for that sofa?” Or asking for an instant price comparison between “this chair and that one.” Or for a lamp to be added to your wish-list.

In a home/office scenario, the system could also let the user remotely control a variety of devices within their field of vision — without needing to be hyper-specific about it. Instead they could just look toward the smart TV or thermostat and speak the required volume/temperature adjustment.

The team has put together a demo video (below) showing the prototype — which they’ve called WorldGaze — in action. “We use the iPhone’s front-facing camera to track the head in 3D, including its direction vector. Because the geometry of the front and back cameras are known, we can raycast the head vector into the world as seen by the rear-facing camera,” they explain in the video.

“This allows the user to intuitively define an object or region of interest using the head gaze. Voice assistants can then use this contextual information to make enquiries that are more precise and natural.”
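To make the geometry a little more concrete, here is a toy numpy sketch (not the WorldGaze code) of the raycasting idea: take a head position and gaze direction estimated in the front camera’s frame, map them into the rear camera’s frame with a known rigid transform, and project points along the ray into the rear image with pinhole intrinsics. Every transform and intrinsic value below is made up for illustration.

```python
# Toy sketch of raycasting a head-gaze vector from the front camera's
# frame into the rear camera's image. All transforms, intrinsics and
# sample numbers are invented for illustration.
import numpy as np

# Assumed rigid transform between the two camera frames: a 180-degree
# rotation about the vertical (y) axis, cameras treated as co-located.
R_front_to_rear = np.diag([-1.0, 1.0, -1.0])
t_front_to_rear = np.zeros(3)

# Assumed rear-camera pinhole intrinsics for a 1920x1080 image (pixels).
K_rear = np.array([[1500.0,    0.0, 960.0],
                   [   0.0, 1500.0, 540.0],
                   [   0.0,    0.0,   1.0]])

def gaze_ray_to_rear_pixels(head_pos, head_dir, depths=(1.0, 2.0, 4.0)):
    """Project points along the head-gaze ray into rear-camera pixel coords."""
    head_dir = head_dir / np.linalg.norm(head_dir)
    pixels = []
    for d in depths:                      # sample the ray at a few depths (meters)
        p_front = head_pos + d * head_dir
        p_rear = R_front_to_rear @ p_front + t_front_to_rear
        if p_rear[2] <= 0:                # behind the rear camera: not visible
            continue
        uvw = K_rear @ p_rear
        pixels.append(uvw[:2] / uvw[2])   # perspective divide to pixel coordinates
    return np.array(pixels)

# Example: a head 40 cm in front of the phone, gazing slightly to the
# right and "through" the device toward the scene behind it.
print(gaze_ray_to_rear_pixels(head_pos=np.array([0.0, 0.0, 0.4]),
                              head_dir=np.array([0.2, 0.0, -1.0])))
```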

In a research paper presenting the prototype they also suggest it could be used to “help to socialize mobile AR experiences, currently typified by people walking down the street looking down at their devices.”

Asked to expand on this, CMU researcher Chris Harrison told TechCrunch: “People are always walking and looking down at their phones, which isn’t very social. They aren’t engaging with other people, or even looking at the beautiful world around them. With something like WorldGaze, people can look out into the world, but still ask questions to their smartphone. If I’m walking down the street, I can inquire and listen about restaurant reviews or add things to my shopping list without having to look down at my phone. But the phone still has all the smarts. I don’t have to buy something extra or special.”

In the paper they note there is a long body of research related to tracking users’ gaze for interactive purposes — but a key aim of their work here was to develop “a functional, real-time prototype, constraining ourselves to hardware found on commodity smartphones.” (Although the rear camera’s field of view is one potential limitation they discuss, including suggesting a partial workaround for any hardware that falls short.)

“Although WorldGaze could be launched as a standalone application, we believe it is more likely for WorldGaze to be integrated as a background service that wakes upon a voice assistant trigger (e.g., ‘Hey Siri’),” they also write. “Although opening both cameras and performing computer vision processing is energy consumptive, the duty cycle would be so low as to not significantly impact battery life of today’s smartphones. It may even be that only a single frame is needed from both cameras, after which they can turn back off (WorldGaze startup time is 7 sec). Using bench equipment, we estimated power consumption at ~0.1 mWh per inquiry.”

Of course there’s still something a bit awkward about a human holding a screen up in front of their face and talking to it — but Harrison confirms the software could work just as easily hands-free on a pair of smart spectacles.

“Both are possible,” he told us. “We choose to focus on smartphones simply because everyone has one (and WorldGaze could literally be a software update), while almost no one has AR glasses (yet). But the premise of using where you are looking to supercharge voice assistants applies to both.”

“Increasingly, AR glasses include sensors to track gaze location (e.g., Magic Leap, which uses it for focusing reasons), so in that case, one only needs outwards facing cameras,” he added.

Taking a further leap it’s possible to imagine such a system being combined with facial recognition technology — to allow a smart spec-wearer to quietly tip their head and ask “who’s that?” — assuming the necessary facial data was legally available in the AI’s memory banks.

Features such as “add to contacts” or “when did we last meet” could then be unlocked to augment a networking or socializing experience. Although, at this point, the privacy implications of unleashing such a system into the real world look rather more challenging than stitching together the engineering. (See, for example, Apple banning Clearview AI’s app for violating its rules.)

“There would have to be a level of security and permissions to go along with this, and it’s not something we are contemplating right now, but it’s an interesting (and potentially scary idea),” agrees Harrison when we ask about such a possibility.

The team was due to present the research at ACM CHI — but the conference was canceled due to the coronavirus.


Google said to be preparing its own chips for use in Pixel phones and Chromebooks

Posted by | Apple, Assistant, chrome os, chromebook, computers, computing, Gadgets, Google, hardware, Intel, iPhone, laptops, mac, machine learning, photo processing, PIXEL, Qualcomm, Samsung, smartphone, smartphones, TC | No Comments

Google is reportedly on the verge of stepping up its hardware game in a way that follows the example set by Apple, with custom-designed silicon powering future smartphones. Axios reports that Google is readying its own in-house processors for use in future Pixel devices, including both phones and eventually Chromebooks, too.

Google’s efforts around its own first-party hardware have been somewhat of a mixed success, with some generations of Pixel smartphone earning high praise, including for its work around camera software and photo processing. But it has used standard Qualcomm processors to date, whereas Apple has long designed its own custom processor (the A-series) for its iPhone, providing the Mac-maker an edge when it comes to performance tailor-made for its OS and applications.

The Axios report says that Google’s in-house chip is code-named “Whitechapel,” and that it was made in collaboration with Samsung and uses that company’s 5-nanometer process. It includes an 8-core ARM-based processor, as well as dedicated on-chip resources for machine learning and Google Assistant.

Google has already taken delivery of the first working prototypes of this processor, but it’s said to be at least a year before they’ll be used in actual shipping Pixel phones, which means we likely have at least one more generation of Pixel that will include a third-party processor. The report says that this will eventually make its way to Chromebooks, too, if all goes to plan, but that that will take longer.

Rumors have circulated for years now that Apple would eventually move its own Mac line to in-house, ARM-based processors, especially as the power and performance capabilities of its A-series chips have scaled and surpassed those of its Intel equivalents. ARM-based Chromebooks already exist, so that could make for an easier transition on the Google side – provided the Google chips can live up to expectations.


FluSense system tracks sickness trends by autonomously monitoring public spaces

Posted by | artificial intelligence, coronavirus, COVID-19, flu, Gadgets, Health, machine learning, science | No Comments

One of the obstacles to accurately estimating the prevalence of sickness in the general population is that most of our data comes from hospitals, not the 99.9 percent of the world that isn’t hospitals. FluSense is an autonomous, privacy-respecting system that counts the people and coughs in public spaces to keep health authorities informed.

Every year has a flu and cold season, of course, though this year’s is far more dire. But it’s like an ordinary flu season in that the main way anyone estimates how many people are sick is by analyzing stats from hospitals and clinics. Patients reporting “influenza-like illness” or certain symptoms get aggregated and tracked centrally. But what about the many folks who just stay home, or go to work sick?

We don’t know what we don’t know here, and that makes estimates of sickness trends — which inform things like vaccine production and hospital staffing — less reliable than they could be. Not only that, but it likely produces biases: Who is less likely to go to a hospital, and more likely to have to work sick? Folks with low incomes and no healthcare.

Researchers at the University of Massachusetts Amherst are attempting to alleviate this data problem with an automated system they call FluSense, which monitors public spaces, counting the people in them and listening for coughing. A few of these strategically placed in a city could give a great deal of valuable data and insight into flu-like illness in the general population.

Tauhidur Rahman and Forsad Al Hossain describe the system in a recent paper published in an ACM journal. FluSense basically consists of a thermal camera, a microphone, and a compact computing system loaded with a machine learning model trained to detect people and the sounds of coughing.

To be clear at the outset, this isn’t recording or recognizing individual faces; like a camera doing face detection in order to set focus, this system only sees that a face and body exist and uses that to create a count of people in view. The number of coughs detected is compared to the number of people, and a few other metrics like sneezes and amount of speech, to produce a sort of sickness index — think of it as coughs per person per minute.

A sample setup, above, the FluSense prototype hardware, center, and sample output from the thermal camera with individuals being counted and outlined.
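As a back-of-the-envelope illustration of the coughs-per-person idea (not FluSense’s actual formula), the sketch below turns per-interval people counts and cough detections into a smoothed coughs-per-person-per-minute rate; the interval lengths, field names and smoothing window are assumptions.

```python
# Back-of-the-envelope "sickness index" from detector output: coughs per
# person per minute for each observation interval, smoothed with a short
# rolling mean. Field names, intervals and sample data are assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Interval:
    minutes: float   # length of the observation interval
    people: int      # people counted via the thermal camera
    coughs: int      # cough events detected by the audio model

def cough_index(intervals, window=3):
    """Trailing rolling mean of coughs per person per minute."""
    rates = []
    for iv in intervals:
        person_minutes = max(iv.people, 1) * iv.minutes   # avoid divide-by-zero
        rates.append(iv.coughs / person_minutes)
    return [mean(rates[max(0, i - window + 1): i + 1]) for i in range(len(rates))]

# Example: a waiting room observed over four 15-minute intervals.
observations = [
    Interval(minutes=15, people=12, coughs=3),
    Interval(minutes=15, people=20, coughs=9),
    Interval(minutes=15, people=18, coughs=14),
    Interval(minutes=15, people=5, coughs=6),
]
print([round(x, 4) for x in cough_index(observations)])
```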

Sure, it’s a relatively simple measurement, but there’s nothing like this out there, even in places like clinic waiting rooms where sick people congregate; admissions staff aren’t keeping a running tally of coughs for daily reporting. One can imagine not only characterizing the types of coughs, but also visual markers like how closely packed people are, and location information like sickness indicators in one part of a city versus another.

“We believe that FluSense has the potential to expand the arsenal of health surveillance tools used to forecast seasonal flu and other viral respiratory outbreaks, such as the COVID-19 pandemic or SARS,” Rahman told TechCrunch. “By understanding the ebb and flow of the symptoms dynamics across different locations, we can have a better understanding of the severity of a novel infectious disease and that way we can enforce targeted public health intervention such as social distancing or vaccination.”

Obviously privacy is an important consideration with something like this, and Rahman explained that was partly why they decided to build their own hardware, since as some may have realized already, this is a system that’s possible (though not trivial) to integrate into existing camera systems.

“The researchers canvassed opinions from clinical care staff and the university ethical review committee to ensure the sensor platform was acceptable and well-aligned with patient protection considerations,” he said. “All persons discussed major hesitations about collecting any high-resolution visual imagery in patient areas.”

Similarly, the speech classifier was built specifically to not retain any speech data beyond that someone spoke — can’t leak sensitive data if you never collect any.

The plan for now is to deploy FluSense “in several large public spaces,” one presumes on the UMass campus in order to diversify their data. “We are also looking for funding to run a large-scale multi-city trial,” Rahman said.

In time this could be integrated with other first- and second-hand metrics used in forecasting flu cases. It may not be in time to help much with controlling COVID-19, but it could very well help health authorities plan better for the next flu season, something that could potentially save lives.


Google launches the first developer preview of Android 11

Posted by | Android, api, Apps, BlackBerry Priv, computing, dave burke, Google, Google Play, machine learning, Mobile, mobile operating system, operating system, operating systems, PIXEL, smartphones, Software, TC | No Comments

With the days of desert-themed releases officially behind it, Google today announced the first developer preview of Android 11, which is now available as system images for Google’s own Pixel devices, starting with the Pixel 2.

As of now, there is no way to install the updates over the air. That’s usually something the company makes available at a later stage. These first releases aren’t meant for regular users anyway. Instead, they are a way for developers to test their applications and get a head start on making use of the latest features in the operating system.

“With Android 11 we’re keeping our focus on helping users take advantage of the latest innovations, while continuing to keep privacy and security a top priority,” writes Google VP of Engineering Dave Burke. “We’ve added multiple new features to help users manage access to sensitive data and files, and we’ve hardened critical areas of the platform to keep the OS resilient and secure. For developers, Android 11 has a ton of new capabilities for your apps, like enhancements for foldables and 5G, call-screening APIs, new media and camera capabilities, machine learning, and more.”

Unlike some of Google’s previous early previews, this first version of Android 11 actually brings quite a few new features to the table. As Burke noted, there are some obligatory 5G features, like a new bandwidth estimate API, as well as a new API that checks whether a connection is unmetered so apps can play higher-resolution video, for example.

With Android 11, Google is also expanding its Project Mainline lineup of updatable modules from 10 to 22. With this, Google is able to update critical parts of the operating system without having to rely on the device manufacturers to release a full OS update. Users simply install these updates through the Google Play infrastructure.

Users will be happy to see that Android 11 will feature native support for waterfall screens that cover a device’s edges, using a new API that helps developers manage interactions near those edges.

Also new are some features that developers can use to handle conversational experiences, including a dedicated conversation section in the notification shade, as well as a new chat bubbles API and the ability to insert images into replies you want to send from the notifications pane.

Unsurprisingly, Google is adding a number of new privacy and security features to Android 11, too. These include one-time permissions for sensitive types of data, as well as updates to how the OS handles data on external storage, which it first previewed last year.

As for security, Google is expanding its support for biometrics and adding different levels of granularity (strong, weak and device credential), in addition to the usual hardening of the platform you would expect from a new release.

There are plenty of other smaller updates as well, including some that are specifically meant to make running machine learning applications easier, but Google specifically highlights the fact that Android 11 will also bring a couple of new features to the OS that will help IT manage corporate devices with enhanced work profiles.

This first developer preview of Android 11 is launching about a month earlier than previous releases, so Google is giving itself a bit more time to get the OS ready for a wider launch. Currently, the release schedule calls for monthly developer preview releases until April, followed by three betas and a final release in Q3 2020.
