WorldGaze uses smartphone cameras to help voice AIs cut to the chase

Posted by | apple inc, artificial intelligence, Assistant, augmented reality, carnegie mellon university, Chris Harrison, Computer Vision, Emerging-Technologies, iPhone, machine learning, Magic Leap, Mobile, siri, smartphone, smartphones, virtual assistant, voice AI, Wearables, WorldGaze | No Comments

If you find voice assistants frustratingly dumb, you’re hardly alone. The much-hyped promise of AI-driven vocal convenience very quickly falls through the cracks of robotic pedantry.

A smart AI that has to come back again (and sometimes again) to ask for extra input to execute your request can seem especially dumb — when, for example, it doesn’t get that the most likely repair shop you’re asking about is not any one of them but the one you’re parked outside of right now.

Researchers at the Human-Computer Interaction Institute at Carnegie Mellon University, working with Gierad Laput, a machine learning engineer at Apple, have devised a demo software add-on for voice assistants that lets smartphone users boost the savvy of an on-device AI by giving it a helping hand — or rather a helping head.

The prototype system makes simultaneous use of a smartphone’s front and rear cameras to be able to locate the user’s head in physical space, and more specifically within the immediate surroundings — which are parsed to identify objects in the vicinity using computer vision technology.

The user is then able to use their head as a pointer to direct their gaze at whatever they’re talking about — i.e. “that garage” — wordlessly filling in contextual gaps in the AI’s understanding in a way the researchers contend is more natural.

So, instead of needing to talk like a robot in order to tap the utility of a voice AI, you can sound a bit more, well, human. Asking stuff like "Siri, when does that Starbucks close?" Or — in a retail setting — "are there other color options for that sofa?" Or asking for an instant price comparison between "this chair and that one." Or for a lamp to be added to your wish-list.

In a home/office scenario, the system could also let the user remotely control a variety of devices within their field of vision — without needing to be hyper-specific about it. Instead they could just look toward the smart TV or thermostat and speak the required volume/temperature adjustment.

The team has put together a demo video (below) showing the prototype — which they’ve called WorldGaze — in action. “We use the iPhone’s front-facing camera to track the head in 3D, including its direction vector. Because the geometry of the front and back cameras are known, we can raycast the head vector into the world as seen by the rear-facing camera,” they explain in the video.

“This allows the user to intuitively define an object or region of interest using the head gaze. Voice assistants can then use this contextual information to make enquiries that are more precise and natural.”
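The raycasting step the researchers describe can be sketched with a few lines of linear algebra: because the pose of the front camera relative to the rear camera is fixed and known, a gaze ray estimated in the front camera's frame can be re-expressed in the rear camera's frame. The rotation, translation and vectors below are illustrative example values, not WorldGaze's actual implementation.

```python
import numpy as np

def raycast_gaze(head_origin_front, gaze_dir_front, R_front_to_rear, t_front_to_rear):
    """Re-express a head-gaze ray (origin + direction) in the rear camera's frame.

    R_front_to_rear / t_front_to_rear are the known, fixed extrinsics
    relating the two on-device cameras (assumed values here).
    """
    origin_rear = R_front_to_rear @ head_origin_front + t_front_to_rear
    dir_rear = R_front_to_rear @ gaze_dir_front
    return origin_rear, dir_rear / np.linalg.norm(dir_rear)

# Example: treat the front and rear cameras as co-located but facing
# opposite directions (a 180-degree rotation about the y axis).
R = np.array([[-1.0, 0.0, 0.0],
              [ 0.0, 1.0, 0.0],
              [ 0.0, 0.0, -1.0]])
t = np.zeros(3)

# A head 30 cm in front of the phone, gazing straight at the front camera.
origin, direction = raycast_gaze(np.array([0.0, 0.0, 0.3]),
                                 np.array([0.0, 0.0, -1.0]), R, t)
print(direction)  # points along +z: out into the world seen by the rear camera
```

The resulting ray can then be intersected with objects detected in the rear camera's view to pick out "that garage" or "that Starbucks."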

In a research paper presenting the prototype they also suggest it could be used to “help to socialize mobile AR experiences, currently typified by people walking down the street looking down at their devices.”

Asked to expand on this, CMU researcher Chris Harrison told TechCrunch: “People are always walking and looking down at their phones, which isn’t very social. They aren’t engaging with other people, or even looking at the beautiful world around them. With something like WorldGaze, people can look out into the world, but still ask questions to their smartphone. If I’m walking down the street, I can inquire and listen about restaurant reviews or add things to my shopping list without having to look down at my phone. But the phone still has all the smarts. I don’t have to buy something extra or special.”

In the paper they note there is a long body of research related to tracking users’ gaze for interactive purposes — but a key aim of their work here was to develop “a functional, real-time prototype, constraining ourselves to hardware found on commodity smartphones.” (Although the rear camera’s field of view is one potential limitation they discuss, including suggesting a partial workaround for any hardware that falls short.)

“Although WorldGaze could be launched as a standalone application, we believe it is more likely for WorldGaze to be integrated as a background service that wakes upon a voice assistant trigger (e.g., ‘Hey Siri’),” they also write. “Although opening both cameras and performing computer vision processing is energy consumptive, the duty cycle would be so low as to not significantly impact battery life of today’s smartphones. It may even be that only a single frame is needed from both cameras, after which they can turn back off (WorldGaze startup time is 7 sec). Using bench equipment, we estimated power consumption at ~0.1 mWh per inquiry.”
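A quick back-of-the-envelope calculation puts that ~0.1 mWh-per-inquiry figure in context. The 12 Wh battery capacity below is an assumed round number for a modern smartphone, not a figure from the paper.

```python
# Rough check of the paper's ~0.1 mWh-per-inquiry estimate against a
# typical smartphone battery (12 Wh is an assumption, not from the paper).
battery_wh = 12.0
per_inquiry_wh = 0.1 / 1000.0  # 0.1 mWh, per the paper's bench estimate

inquiries_per_full_battery = battery_wh / per_inquiry_wh
print(round(inquiries_per_full_battery))  # 120000
```

On those assumptions a full battery would cover on the order of a hundred thousand inquiries, which supports the authors' claim that the duty cycle would not meaningfully affect battery life.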

Of course there’s still something a bit awkward about a human holding a screen up in front of their face and talking to it — but Harrison confirms the software could work just as easily hands-free on a pair of smart spectacles.

“Both are possible,” he told us. “We choose to focus on smartphones simply because everyone has one (and WorldGaze could literally be a software update), while almost no one has AR glasses (yet). But the premise of using where you are looking to supercharge voice assistants applies to both.”

“Increasingly, AR glasses include sensors to track gaze location (e.g., Magic Leap, which uses it for focusing reasons), so in that case, one only needs outwards facing cameras,” he added.

Taking a further leap it’s possible to imagine such a system being combined with facial recognition technology — to allow a smart spec-wearer to quietly tip their head and ask “who’s that?” — assuming the necessary facial data was legally available in the AI’s memory banks.

Features such as “add to contacts” or “when did we last meet” could then be unlocked to augment a networking or socializing experience. Although, at this point, the privacy implications of unleashing such a system into the real world look rather more challenging than stitching together the engineering. (See, for example, Apple banning Clearview AI’s app for violating its rules.)

“There would have to be a level of security and permissions to go along with this, and it’s not something we are contemplating right now, but it’s an interesting (and potentially scary idea),” agrees Harrison when we ask about such a possibility.

The team was due to present the research at ACM CHI — but the conference was canceled due to the coronavirus.

Powered by WPeMatico

Google said to be preparing its own chips for use in Pixel phones and Chromebooks

Posted by | Apple, Assistant, chrome os, chromebook, computers, computing, Gadgets, Google, hardware, Intel, iPhone, laptops, mac, machine learning, photo processing, PIXEL, Qualcomm, Samsung, smartphone, smartphones, TC | No Comments

Google is reportedly on the verge of stepping up its hardware game in a way that follows the example set by Apple, with custom-designed silicon powering future smartphones. Axios reports that Google is readying its own in-house processors for use in future Pixel devices, including phones and, eventually, Chromebooks too.

Google’s efforts around its own first-party hardware have been something of a mixed success, with some generations of the Pixel smartphone earning high praise, including for their camera software and photo processing. But it has used standard Qualcomm processors to date, whereas Apple has long designed its own custom processors (the A-series) for the iPhone, giving the Mac maker an edge when it comes to performance tailor-made for its OS and applications.

The Axios report says that Google’s in-house chip is code-named “Whitechapel,” and that it was made in collaboration with Samsung and uses that company’s 5-nanometer process. It includes an 8-core ARM-based processor, as well as dedicated on-chip resources for machine learning and Google Assistant.

Google has already taken delivery of the first working prototypes of this processor, but it’s said to be at least a year before they’ll be used in actual shipping Pixel phones, which means we likely have at least one more generation of Pixel that will include a third-party processor. The report says that this will eventually make its way to Chromebooks, too, if all goes to plan, but that that will take longer.

Rumors have circulated for years that Apple would eventually move its Mac line to in-house, ARM-based processors, especially as the power and performance of its A-series chips have scaled to surpass those of their Intel equivalents. ARM-based Chromebooks already exist, so that could make for an easier transition on the Google side – provided the Google chips can live up to expectations.


American Airlines starts trialing Google Nest Hubs as translators in its lounges

Posted by | Airline Industry, Airlines, American-Airlines, Assistant, aviation, brand management, CES 2020, customer experience, Gadgets, Google, Japan Airlines, TC, United States | No Comments

Delta is keynoting CES today and launching a slew of updates to its digital services. Its competitors don’t want to be left behind, of course, so it’s probably no surprise that American Airlines also made a small but nifty tech announcement today. In partnership with Google, American will start trialing Google Nest Hubs and the Google Assistant interpreter mode in its airport lounges, starting at Los Angeles International Airport this week.

The idea here is to make it easier for the company’s customer service teams to provide personalized service to its customers when no multilingual representative is available. Because the interpreter mode supports 29 languages, including the likes of Arabic, French, German, Japanese, Russian, Spanish and Vietnamese, the Assistant should be able to help in most cases.

“The science fiction universal translator is now science fact,” said Maya Leibman, American’s chief information officer. “Incorporating technology like the Google Assistant’s interpreter mode will help us break down barriers, provide a worry-free travel experience and make travel more accessible to all.”

While this isn’t exactly a groundbreaking new airline experience, what we’re seeing here is how the airline industry is now starting to see technology as a way to differentiate. There is only so much you can do once a customer has boarded (though a good seat, meal and friendly service sure help). What the airlines want to do, though, is extend their relationship with their customers beyond that initial booking experience and the flying experience, with more proactive services through their mobile apps and other touchpoints. That’s pretty clear from Delta’s announcements today, and the rest of the industry is pushing in the same direction.

CES 2020 coverage - TechCrunch


Brilliant adds a dimmer switch and smart plug to its smart home ecosystem

Posted by | Assistant, belkin wemo, Bluetooth, CES 2020, ecobee, electronics manufacturing, energy efficiency, Gadgets, Google, hardware, Home Automation, Honeywell, kwikset, lifx, lighting, operating systems, Sonos, TC, wemo | No Comments

Until now, Brilliant only offered its relatively high-end smart switches with a touchscreen, but at CES this week, the company is expanding its product lineup with a new dimmer switch and smart plug. Both require that you already own at least one Brilliant Control, so these aren’t standalone devices but instead expansions to the Brilliant Control system.

The main advantage here is that once you have bought into the Brilliant system for your smart home setup, you won’t need to get a new Brilliant Control for every room. Because the Controls start at $299 for a single switch, that would be a very pricey undertaking. At $69.99, the dimmer is competitively priced (and offers a discount for bundles with multiple switches), as is the plug, at $29.99. This will surely make the overall Brilliant system more attractive to a lot of people.

I’ve tested the Control in my house for the last few weeks and came away impressed, mostly because it brings a single, flexible physical control system to the disparate smart plugs, locks and other gadgets I’ve accumulated over the last year or so. I couldn’t imagine getting one for every room, though, as that would simply be far too expensive. Brilliant’s system works with Alexa and Google Assistant, and includes third-party integrations with companies like Philips Hue, LIFX, TP-Link, Lutron, Wemo, Ecobee, Honeywell, August, Kwikset, Schlage, Ring, Sonos and others. The different Brilliant devices communicate over a Bluetooth mesh network and connect to the internet over Wi-Fi.

“Before Brilliant, an integrated whole-home smart home and lighting system meant either spending tens of thousands of dollars on an inflexible home automation system, or piecing together a jumble of disparate devices and apps,” said Aaron Emigh, co-founder and CEO of Brilliant. “With our new smart switch and plug-in combination with the Brilliant Control, we are realizing our mission to make it possible for every homeowner to experience the comfort, energy efficiency, safety and convenience of living in a true smart home.”

One nice feature of the dimmer is that it includes a motion sensor, which will allow for a lot of interesting usage scenarios. You’ll also be able to double-tap the switch to trigger a smart home or lighting scene.

The plug is obviously more straightforward. It’s a Wemo-style plug that you simply plug in. Unlike Brilliant’s other devices, which require you either to be comfortable doing some very basic electrical work yourself (Brilliant offers very straightforward instructions) or to have somebody install them for you, this one is indeed plug and play.

Both the plug and dimmer switch are now available for pre-order and will ship in Q1 2020.



BMW says ‘ja’ to Android Auto

Posted by | Alexa, Android, Android Auto, artificial intelligence, Assistant, automotive, BMW, business model, CarPlay, computing, Cortana, Dieter May, digital services, Google, Microsoft, Mobile, operating systems, smartphones | No Comments

BMW today announced that it is finally bringing Android Auto to its vehicles, starting in July 2020. With that, it will join Apple’s CarPlay in the company’s vehicles.

The first live demo of Android Auto in a BMW will happen at CES 2020 next month. After that, it will become available as an update to drivers in 20 countries with cars that feature BMW OS 7.0. BMW will only support Android Auto over a wireless connection, though, which somewhat limits its compatibility.

Only two years ago, the company said that it wasn’t interested in supporting Android Auto. At the time, Dieter May, who was then the senior VP for Digital Services and Business Model, explicitly told me that the company wanted to focus on its first-party apps in order to retain full control over the in-car interface and that he wasn’t interested in seeing Android Auto in BMWs. May has since left the company, though it’s also worth noting that Android Auto itself has become significantly more polished over the course of the last two years.

“The Google Assistant on Android Auto makes it easy to get directions, keep in touch and stay productive. Many of our customers have pointed out the importance to them of having Android Auto inside a BMW for using a number of familiar Android smartphone features safely without being distracted from the road, in addition to BMW’s own functions and services,” said Peter Henrich, senior vice president Product Management BMW, in today’s announcement.

With this, BMW will also finally offer support for the Google Assistant after early bets on Alexa, Cortana and the BMW Assistant (which itself is built on top of Microsoft’s AI stack). The company has long said it wants to offer support for all popular digital assistants. For the Google Assistant, the only way to make that work, at least for the time being, is Android Auto.

In BMWs, Android Auto will see integrations into the car’s digital cockpit, in addition to BMW’s Info Display and the heads-up display (for directions). That’s a pretty deep integration, which goes beyond what most car manufacturers feature today.

“We are excited to work with BMW to bring wireless Android Auto to their customers worldwide next year,” said Patrick Brady, vice president of engineering at Google. “The seamless connection from Android smartphones to BMW vehicles allows customers to hit the road faster while maintaining access to all of their favorite apps and services in a safer experience.”


Google Assistant gets a customized alarm, based on weather and time

Posted by | artificial intelligence, Assistant, Gadgets, Google, Google Assistant, lenovo | No Comments

Alarm clocks have been one of the most obvious applications for smart screens since the category’s introduction. Devices like Lenovo’s Smart Clock and the Amazon Echo Show 5 have demonstrated some interesting features in the bedside display form factor, and Google has worked with the former to refine the experience.

This morning, the company introduced a handful of features to refine the experience. “Impromptu” is an interesting new addition to the portfolio that constructs a customized alarm based on a series of factors, including weather and time of day.

Here’s what a 50-degree, early-morning wake-up sounds like:


Not a bad thing to wake up to. A little Gershwin-esque, perhaps. 

Per a blog post that went up this morning, the alarm ringtone is based on the company’s open-source project, Magenta. Google AI describes it thusly:

Magenta was started by researchers and engineers from the Google Brain team, but many others have contributed significantly to the project. We develop new deep learning and reinforcement learning algorithms for generating songs, images, drawings, and other materials. But it’s also an exploration in building smart tools and interfaces that allow artists and musicians to extend their processes using these models. We use TensorFlow and release our models and tools in open source on our GitHub.

The new feature rolls out today.


Sonos acquires voice assistant startup Snips, potentially to build out on-device voice control

Posted by | Amazon, Assistant, computing, consumer electronics, Gadgets, Google, hardware, ikea, M&A, machine learning, play:3, smart speakers, Snips, Sonos, Sonos Beam, TC, virtual assistant | No Comments

Sonos revealed during its quarterly earnings report that it has acquired voice assistant startup Snips in a $37 million cash deal, Variety reported on Wednesday. Snips, which had been developing dedicated smart device assistants that can operate primarily locally, instead of relying on consistently round-tripping voice data to the cloud, could help Sonos set up a voice control option for its customers that has “privacy in mind” and is focused more narrowly on music control than on being a general-purpose smart assistant.

Sonos has worked with both Amazon and Google and their voice assistants, providing support for either on their more recent products, including the Sonos Beam and Sonos One smart speakers. Both of these require an active cloud connection to work, however, and have received scrutiny from consumers and consumer protection groups recently for how they handle the data they collect from users. They’ve introduced additional controls to help users navigate their own data sharing, but Sonos CEO Patrick Spence noted in an interview with Variety that one of the things the company can do in building its own voice features is developing them “with privacy in mind.”

Notably, Sonos has introduced a version of its Sonos One that leaves out the microphone hardware altogether — the Sonos One SL introduced earlier this fall. The fact that they saw opportunity in a mic-less second version of the Sonos One suggests it’s likely there are a decent number of customers who like the option of a product that’s not round-tripping any information with a remote server. Spence also seemed quick to point out that Sonos wouldn’t seek to compete with its voice assistant partners, however, since anything they build will be focused much more specifically on music.

You can imagine how local machine learning would be able to handle commands like skipping, pausing playback and adjusting volume (and maybe an even more advanced feature like playing back a saved playlist), without having to connect to any kind of cloud service. It seems like what Spence envisions is something like that which can provide basic controls, while still allowing the option for a customer to enable one of the more full-featured voice assistants depending on their preference.
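The narrow, music-only scope described above is what makes fully local processing plausible: a small, fixed command vocabulary can be matched on-device without any cloud round-trip. Here is a minimal, purely illustrative sketch of that idea — keyword rules over a speech transcript. It is not Sonos’s or Snips’s implementation, and all the phrase lists are assumptions.

```python
# Illustrative on-device intent matching for a small music-control
# vocabulary. No network access needed: everything runs locally.
INTENT_KEYWORDS = {
    "pause":       ["pause", "stop"],
    "skip":        ["skip", "next"],
    "volume_up":   ["louder", "volume up", "turn it up"],
    "volume_down": ["quieter", "volume down", "turn it down"],
}

def match_intent(transcript):
    """Return the first intent whose keywords appear in the transcript."""
    text = transcript.lower()
    for intent, phrases in INTENT_KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return None  # unrecognized: could fall back to a full cloud assistant

print(match_intent("hey, skip this song"))   # skip
print(match_intent("make it a bit louder"))  # volume_up
```

A production system would use an on-device acoustic and language model rather than substring matching, but the privacy property is the same: the audio never has to leave the speaker for these basic controls.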

Meanwhile, partnerships continue to prove lucrative for Sonos: Its team-up with Ikea resulted in 30,000 speakers sold on launch day, the company also shared alongside its earnings. That’s a lot to move in one day, especially in this category.


The iRobot Roomba s9+ and Braava m6 are the robots you should trust to clean your house well

Posted by | Assistant, contents, Gadgets, Google, hardware, home, home appliances, Home Automation, iRobot, Reviews, robot, robotic vacuum cleaner, robotics, Roomba, samsung galaxy s9, smart devices, TC, Vacuum | No Comments

This holiday season, we’re going to be looking back at some of the best tech of the past year, and providing fresh reviews in a sort of ‘greatest hits’ across a range of categories. First up: iRobot’s top-end home cleaning robots, the Roomba s9+ robot vacuum, and the Braava m6 robot mop and floor sweeper. Both of these represent the current peak of iRobot’s technology, and while that shows up in the price tag, it also shows up in performance.

iRobot Roomba S9+

The iRobot Roomba S9+ is actually two things: The Roomba S9, which is available separately, and the Clean Base that enables the vacuum to empty itself after a run, giving you many cleanings before it needs you to actually open up a bin or replace a bag. Both the vacuum and its base are WiFi-connected, and controllable via iRobot’s app, as well as Google Assistant and Alexa. Combined, it’s the most advanced autonomous home vacuum you can get, and it manages to outperform a lot of older or less sophisticated robot vacuums even in situations that have historically been hard for this kind of tech to handle.

Like the Roomba S7 before it (which is still available and still a great vacuum, for a bit less money), the S9 uses what’s called SLAM (Simultaneous Localization and Mapping), and a specific variant of that called vSLAM (the “v” stands for “visual”). This technology means that as it works, it’s generating and adapting a map of your home to ensure that it can clean more effectively and efficiently.

After either a few dedicated training runs (which you can opt to send the vacuum on when it’s learning a new space) or a few more active vacuum runs, the Roomba S9 will remember your home’s layout, and provide a map that you can customize with room dividers and labels. This then turns on the vacuum’s real smart superpowers, which include being able to vacuum just specific rooms on command, as well as features like letting it easily pick up where it left off if it needs to return to its charging station mid-run. With the S9 and its large battery, the vacuum can do an entire run of my large two-bedroom condo on a single charge (the i7 I used previously needed two charges to finish up).

The S9’s vSLAM and navigation systems seem incredibly well-developed in my use: I’ve never once had the vacuum become stuck, or confused by changes in floor colouring, even going from a very light to a very dark floor (this is something that past vacuums have had difficulty with). It infallibly finds its way back to the Clean Base, and also never seems to be flummoxed by even drastic changes in lighting over the course of the day.

So it’s smart, but does it suck? Yes, it does – in the best possible way. Just like it doesn’t require stops to charge up, it also manages to clean my entire space with just one bin. There’s a lot more room in here thanks to the new design, and it handles even my dog’s hair with ease (my dog sheds a lot, and it’s very obvious light hair against dark wood floors). The new angled design on the front of the vacuum means it does a better job of getting into corners than previous fully round designs, and that shows, because corners are where clumps of hair go to gather in a dog-friendly household.

The ‘+’ in the S9+ is that Clean Base as I mentioned – think of it like the tower of lazy cleanliness. The base has a port that sucks dirt from the S9 when it’s done a run, shooting it into a bag in the top of the tower that can hold up to 30 full bins of dirt. That ends up being a lot in practice – it should last you months, depending on house size. Replacement bags cost $20 for three, which is probably what you’ll go through in a year, so it’s really a negligible cost for the convenience you’re getting.

Braava m6

The Roomba S9’s best friend, if you will, is the Braava m6. This is iRobot’s latest and greatest smart mop, which is exactly what it sounds like: whereas Roombas vacuum, the Braava uses either single-use disposable or washable/reusable microfibre pads, along with iRobot’s own cleaning fluid, to clean hardwood, tile, vinyl, cork and other hard-surface floors once the vacuuming is done. It can also run a dry sweep, which is useful for picking up dust and pet hair as a finishing touch after the vacuum’s run.

iRobot has used its unique position in offering both of these types of smart devices to have them work together – if you have both the S9 and the Braava m6 added to your iRobot Home app, you’ll get an option to mop the floors right after the vacuum job is complete. It’s an amazing convenience feature, and one that works fairly well – but there are some differences in the smarts powering the Braava m6 and the Roomba s9 that lead to some occasional challenges.

The Braava m6 doesn’t seem to be quite as capable when it comes to mapping and navigating its surroundings. My condo layout is relatively simple, all one level with no drops or gaps. But the m6 has encountered some scenarios where it doesn’t seem to be able to cross a threshold or make sense of all floor types. Based on error messages, it seems like it’s identifying some surfaces as ‘cliffs’ or steep drops when transitioning back from lighter floors to darker ones.

What this means in practice is that a couple of times per run, I have to reposition the Braava manually. There are ways to solve for this, however, built into the software: Thanks to the smart mapping feature, I can just direct the Braava to focus only on the rooms with dark hardwood, or I can just adjust it when I get an alert that it’s having difficulty. It’s still massively more convenient than mopping by hand, and typically the m6 does about 90 percent of the apartment before it runs into difficulty in one of these few small trouble areas.

If you’ve read online customer reviews of the m6, you may also have seen complaints that it can leave tire marks on dark floors. I found that to be true – but with a few caveats. They definitely aren’t as pronounced as I expected based on some of the negative reviews out there, and I have very dark floors. They’re also only really visible in direct sunlight, and then only faintly. They also fade pretty quickly, which means you won’t notice them most of the time if you’re mopping only once every few vacuum runs. In the end, it’s something to be aware of, but for me it’s not a dealbreaker – far from it. The m6 still does a fantastic job overall of mopping and sweeping, and saves me a ton of labor on what is normally a pretty back-breaking manual task.

Bottom line

These iRobot home cleaning gadgets are definitely high-end, with the s9 starting at $1,099.99 ($1,399.99 with the Clean Base) and the m6 starting at $499.99. You can get a bundle with both starting at $1,439.98, but even that is still a lot for cleaning appliances. This is definitely a case where the “you get what you pay for” maxim proves true, however. Either the s9+ alone or the combo of the vacuum and mop represents a huge convenience, especially when run on a daily or similarly regular schedule, versus doing the same thing manually. The s9 also frankly does a better job than I ever could with my own manual vacuum, since it’s much better at getting into corners, under couches, and cleaning along and under trim thanks to its spinning brush. And asking Alexa to have Roomba start a cleaning run feels like living in the future in the best possible way.


The 7 most important announcements from Microsoft Ignite

Posted by | Android, Assistant, AWS, Bing, chromium, cloud computing, cloud infrastructure, computing, Cortana, Developer, Enterprise, GitHub, Google, google cloud, linux, machine learning, Microsoft, Microsoft Ignite 2019, microsoft windows, San Francisco, Satya Nadella, TC, voice assistant, Windows 10, Windows Phone | No Comments

It’s Microsoft Ignite this week, the company’s premier event for IT professionals and decision-makers. But it’s not just about new tools for role-based access. Ignite is also very much a forward-looking conference that keeps the changing role of IT in mind. And while there isn’t a lot of consumer news at the event, the company does tend to make a few announcements for developers, as well.

This year’s Ignite was especially news-heavy. Ahead of the event, the company provided journalists and analysts with an 87-page document that lists all of the news items. If I counted correctly, there were about 175 separate announcements. Here are the top seven you really need to know about.

Azure Arc: you can now use Azure to manage resources anywhere, including on AWS and Google Cloud

What was announced: Microsoft was among the first of the big cloud vendors to bet big on hybrid deployments. With Arc, the company is taking this a step further. It will let enterprises use Azure to manage their resources across clouds — including those of competitors like AWS and Google Cloud. It’ll work for Windows and Linux Servers, as well as Kubernetes clusters, and also allows users to take some limited Azure data services with them to these platforms.

Why it matters: With Azure Stack, Microsoft already allowed businesses to bring many of Azure’s capabilities into their own data centers. But because it’s basically a local version of Azure, it only worked on a limited set of hardware. Arc doesn’t bring all of the Azure Services, but it gives enterprises a single platform to manage all of their resources across the large clouds and their own data centers. Virtually every major enterprise uses multiple clouds. Managing those environments is hard. So if that’s the case, Microsoft is essentially saying, let’s give them a tool to do so — and keep them in the Azure ecosystem. In many ways, that’s similar to Google’s Anthos, yet with an obvious Microsoft flavor, less reliance on Kubernetes and without the managed services piece.

Microsoft launches Project Cortex, a knowledge network for your company

What was announced: Project Cortex creates a knowledge network for your company. It uses machine learning to analyze all of the documents and contracts in your various repositories — including those of third-party partners — and then surfaces them in Microsoft apps like Outlook, Teams and its Office apps when appropriate. It’s the company’s first new commercial service since the launch of Teams.

Why it matters: Enterprises these days generate tons of documents and data, but it’s often spread across numerous repositories and is hard to find. With this new knowledge network, the company aims to surface this information proactively, but it also looks at who the people are who work on them and tries to help you find the subject matter experts when you’re working on a document about a given subject, for example.


Microsoft launches Endpoint Manager to modernize device management

What was announced: Microsoft is combining ConfigMgr and Intune, the services that let enterprises manage the PCs, laptops, phones and tablets they issue to their employees, under the new Endpoint Manager brand. With that, it's also launching a number of tools and recommendations to help companies modernize their deployment strategies. ConfigMgr users will now also get a license to Intune, allowing them to move to cloud-based management.

Why it matters: In this world of BYOD, where every employee uses multiple devices and employee machines face constant attacks, effectively managing those devices has become a challenge for most IT departments. They often use a mix of different tools (ConfigMgr for PCs, for example, and Intune for cloud-based management of phones). Now they can get a single view of their deployments in Endpoint Manager, which Microsoft CEO Satya Nadella described as one of the most important announcements of the event. And thanks to the Intune license they now have access to, ConfigMgr users get an easy path to cloud-based device management.

Microsoft’s Chromium-based Edge browser gets new privacy features, will be generally available January 15

What was announced: Microsoft’s Chromium-based version of Edge will be generally available on January 15. The release candidate is available now. That’s the culmination of a lot of work from the Edge team, and, with today’s release, the company is also adding a number of new privacy features to Edge that, in combination with Bing, offer some capabilities that some of Microsoft’s rivals can’t yet match, thanks to its newly enhanced InPrivate browsing mode.

Why it matters: Browsers are interesting again. After years of focusing on speed, the new focus is now privacy, and that’s giving Microsoft a chance to gain users back from Chrome (though maybe not Firefox). At Ignite, Microsoft also stressed that Edge’s business users will get to benefit from a deep integration with its updated Bing engine, which can now surface business documents, too.


You can now try Microsoft’s web-based version of Visual Studio

What was announced: At Build earlier this year, Microsoft announced that it would soon launch a web-based version of its Visual Studio development environment, based on the work it did on the free Visual Studio Code editor. This experience, with deep integrations into the Microsoft-owned GitHub, is now live in a preview.

Why it matters: Microsoft has long said that it wants to meet developers where they are. While Visual Studio Online isn’t likely to replace the desktop-based IDE for most developers, it’s an easy way for them to make quick changes to code that lives in GitHub, for example, without having to set up their IDE locally. As long as they have a browser, developers will be able to get their work done.

Microsoft launches Power Virtual Agents, its no-code bot builder

What was announced: Power Virtual Agents is Microsoft’s new no-code/low-code tool for building chatbots. It leverages a lot of Azure’s machine learning smarts to let you create a chatbot with the help of a visual interface. And if you outgrow that interface and want to get to the actual code, you can do that, too.

Why it matters: Chatbots aren’t exactly at the top of the hype cycle, but they do have lots of legitimate uses. Microsoft argues that a lot of early efforts were hampered by the fact that the developers building the bots were far removed from the users. With a visual tool, though, anybody can come in and build a chatbot — and many of those builders will have a much better understanding of what their users are looking for than a developer who sits far away from that business group.

Cortana wants to be your personal executive assistant and read your emails to you, too

What was announced: Cortana lives — and it now also has a male voice. But more importantly, Microsoft launched a few new focused Cortana-based experiences that show how the company is positioning its voice assistant as a tool for productivity. In Outlook on iOS (with Android coming later), Cortana can now read you a summary of what’s in your inbox — and you can have a chat with it to flag emails, delete them or dictate answers. Cortana can now also send you a daily summary of your calendar appointments and important emails that need answers, and can suggest focus time for you to get actual work done that’s not email.

Why it matters: In this world of competing assistants, Microsoft is very much betting on productivity. Cortana didn’t work out as a consumer product, but the company believes there is a large (and lucrative) niche for an assistant that helps you get work done. Because Microsoft doesn’t have a lot of consumer data, but does have lots of data about your work, that’s probably a smart move.


SAN FRANCISCO, CA – APRIL 02: Microsoft CEO Satya Nadella walks in front of the new Cortana logo as he delivers a keynote address during the 2014 Microsoft Build developer conference on April 2, 2014 in San Francisco, California. (Photo by Justin Sullivan/Getty Images)

Bonus: Microsoft agrees with you and thinks meetings are broken — and often it’s the broken meeting room that makes meetings even harder. To battle this, the company today launched Managed Meeting Rooms, which, for $50 per room per month, lets you delegate the monitoring and management of your meeting rooms’ technical infrastructure to Microsoft.


New Nvidia Shield Android TV streaming device leaks via Amazon listing

Posted by | Amazon, Android, artificial intelligence, Assistant, computing, Federal Communications Commission, Gadgets, Google, hardware, nvidia, nvidia shield, Portable Media Players, set top box, Shield Portable, Streaming Media, streaming media player, TC, tegra | No Comments

The fact that Nvidia is updating its Shield TV hardware has already been telegraphed via an FCC filing, but a leak earlier today paints a much more detailed picture. An Amazon listing for a new Nvidia Shield Pro set-top streaming device went live briefly before being taken down, showing a familiar hardware design alongside a new remote control, and listing some of the feature updates new to this generation of hardware.

The listing, captured by the eagle-eyed Android TV Rumors and shared via Twitter, includes a $199.99 price point and specs that include 3GB of RAM, 2x USB ports, a new Nvidia Tegra X1+ chip and 16GB of on-board storage. In addition to the price, the Amazon listing gave a release date of October 28 for the new hardware.

If this Amazon page is accurate (and it does look like the kind of official product page one would expect from Nvidia), the new Shield TV’s processor will be “up to 25% faster than the previous generation,” and will offer “next-generation AI upscaling” for improving the quality of HD video on 4K-capable displays.

It’ll offer support for Dolby Vision HDR, plus surround sound with Dolby Atmos support, and provide “the most 4K HDR content of any streaming media player.” There’s also built-in Google Assistant support, which was offered on the existing hardware, and it’ll work with Alexa for hands-free control.

The updated @Nvisia Shield TV is on https://t.co/er7yzgQKLY
Comes with Dolby Vison, new Tegra X1+ processor https://t.co/IXOFlEvVCS pic.twitter.com/TGEWEZD2zM

— Android TV Rumors (@androidtv_rumor) October 17, 2019

The feature photos for the listing show a new remote control, which has a pyramid-like design, as well as a lot more dedicated buttons on the face. There’s backlighting, and an IR blaster for TV control, as well as a “built-in lost remote locator” according to the now-removed Amazon page.

This Amazon page certainly paints a comprehensive picture of what to expect, and it looks like a compelling update to be sure. The listing is gone now, however, so stay tuned to find out whether this is indeed the real thing, and whether the updated streamer will actually be available soon.

UPDATE: Yet another Nvidia leak followed the first, this time through retailer Newegg (via The Verge). This one is different, however: it features a Shield TV device (no “Pro” in the name) with almost all the same specs but a much smaller design that adds a microSD card slot, and it appears to have half the on-board storage (8GB versus 16GB) and a retail price of around $150.

