
NASA’s Parker Solar Probe launches tonight to ‘touch the sun’


NASA’s ambitious mission to go closer to the Sun than ever before is set to launch in the small hours between Friday and Saturday — at 3:33 AM Eastern from Kennedy Space Center in Florida, to be precise. The Parker Solar Probe, after a handful of gravity assists and preliminary orbits, will enter a stable orbit around the enormous nuclear fireball that gives us all life and sample its radiation from less than 4 million miles away. Believe me, you don’t want to get much closer than that.

If you’re up late tonight (technically tomorrow morning), you can watch the launch live on NASA’s stream.

This is the first mission named after a living researcher, in this case Eugene Parker, who in the ’50s made a number of proposals and theories about the way that stars give off energy. He’s the guy who gave us solar wind, and his research was hugely influential in the study of the sun and other stars — but it’s only now that some of his hypotheses can be tested directly. (Parker himself visited the craft during its construction, and will be at the launch. No doubt he is immensely proud and excited about this whole situation.)

“Directly” means going as close to the sun as technology allows — which leads us to the PSP’s first major innovation: its heat shield, or thermal protection system.

There’s one good thing to be said for the heat near the sun: it’s a dry heat. Because there’s no water vapor or gas in space to carry heat to you, virtually all of it arrives as direct radiation, so find some shade and you’ll be quite comfortable. The probe is essentially carrying the most heavy-duty parasol ever created.

It’s a sort of carbon sandwich, with carbon-carbon composite on the outside (the part that gets superheated) and a carbon foam core. Altogether it’s less than a foot thick, but it reduces the temperature the probe’s instruments are subjected to from 2,500 degrees Fahrenheit to 85 — actually cooler than it is in much of the U.S. right now.
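That 2,500-degree figure is consistent with a quick radiative-equilibrium estimate. The sketch below treats the shield as a flat plate that absorbs sunlight on one face and re-radiates from both, with absorptivity and emissivity of 1; those are simplifying assumptions for illustration, not the real shield's thermal model.

```python
import math

# Rough radiative-equilibrium estimate of the shield's sun-facing temperature at
# closest approach. Treating it as a flat plate that absorbs sunlight on one
# face and re-radiates from both, with absorptivity and emissivity of 1, is a
# simplifying assumption for illustration, not the real shield's thermal model.

L_SUN = 3.828e26     # solar luminosity, W
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
R = 6.9e9            # distance from the Sun's center at closest approach, m

flux = L_SUN / (4 * math.pi * R**2)        # ~650,000 W/m^2 hitting the shield
T_front_k = (flux / (2 * SIGMA)) ** 0.25   # equilibrium temperature, kelvin
T_front_f = (T_front_k - 273.15) * 9 / 5 + 32
print(f"{T_front_f:,.0f} F")               # ~2,300 F, the same ballpark as the 2,500 F above
```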


The car-sized Parker will orbit the sun and constantly rotate itself so the heat shield is facing inward and blocking the brunt of the solar radiation. The instruments mostly sit behind it in a big insulated bundle.

And such instruments! There are three major experiments or instrument sets on the probe.

WISPR (Wide-Field Imager for Parker Solar Probe) is a pair of wide-field telescopes that will watch and image the structure of the corona and solar wind. This is the kind of observation we’ve made before — but never from up close. We generally observe these phenomena from the neighborhood of the Earth, nearly 100 million miles away. You can imagine that cutting out 90 million miles of cosmic dust, interfering radiation and other nuisances will produce an amazingly clear picture.

SWEAP (Solar Wind Electrons Alphas and Protons investigation) looks out to the side of the craft to watch the flows of electrons as they are affected by solar wind and other factors. And on the front is the Solar Probe Cup (I suspect this is a reference to the Ray Bradbury story, “The Golden Apples of the Sun”), which is exposed to the full strength of the sun’s radiation; a tiny opening allows charged particles in, and by tracking how they pass through a series of charged windows, the instrument can sort them by type and energy.

FIELDS is another instrument set that gets the full heat of the sun. Its antennas are the ones sticking out from the sides — they need to be exposed in order to directly sample the electric field surrounding the craft. A set of “fluxgate magnetometers,” clearly a made-up name, measures the magnetic field at an incredibly high rate: two million samples per second.

They’re all powered by solar panels, which seems obvious, but actually it’s a difficult proposition to keep the panels from overloading that close to the sun. They hide behind the shield and just peek out at an oblique angle, so only a fraction of the radiation hits them.

Even then, they’ll get so hot that the team needed to implement the first-ever active water cooling system on a spacecraft. Water is pumped through the cells and back behind the shield, where it is cooled by, well, space.

The probe’s mission profile is a complicated one. After escaping the clutches of the Earth, it will swing by Venus, not to get a gravity boost, but “almost like doing a little handbrake turn,” as one official described it. The flyby slows the probe down and sends it closer to the sun — and it will do that six more times, for seven Venus flybys in all, each one bringing it closer and closer to the sun’s surface, ultimately arriving in a stable orbit 3.83 million miles above the surface — that’s 95 percent of the way from the Earth to the sun.

On the way it will hit a top speed of 430,000 miles per hour, which will make it the fastest spacecraft ever launched.
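That speed is easy to sanity-check with the vis-viva equation. In the sketch below, the closest approach comes from the article and the aphelion is assumed to sit near Venus's orbit; both the aphelion and the constants are rough illustrative values, not mission-design figures.

```python
import math

# Rough sanity check of the ~430,000 mph figure via the vis-viva equation,
# v^2 = GM * (2/r - 1/a). The aphelion (assumed near Venus's orbit) and the
# constants below are approximations for illustration only.

GM_SUN = 1.327e20              # Sun's gravitational parameter, m^3/s^2
SOLAR_RADIUS_M = 6.957e8
METERS_PER_MILE = 1609.344
AU_M = 1.496e11

r_peri = SOLAR_RADIUS_M + 3.83e6 * METERS_PER_MILE   # closest approach, from the Sun's center
r_apo = 0.73 * AU_M                                  # assumed aphelion near Venus's orbit
a = (r_peri + r_apo) / 2                             # semi-major axis of the final ellipse

v_peri_ms = math.sqrt(GM_SUN * (2 / r_peri - 1 / a))     # speed at closest approach, m/s
print(f"{v_peri_ms * 3600 / METERS_PER_MILE:,.0f} mph")  # on the order of 430,000 mph
```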

Parker will make 24 total passes through the corona, and during these times communication with Earth may be interrupted or impractical. If a solar cell is overheating, do you want to wait 20 minutes for a decision from NASA on whether to pull it back? No. This close to the sun even a slight miscalculation results in the reduction of the probe to a cinder, so the team has imbued it with more than the usual autonomy.

It’s covered in sensors in addition to its instruments, and an onboard AI will be empowered to make decisions to rectify anomalies. That sounds worryingly like a HAL 9000 situation, but there are no humans on board to kill, so it’s probably okay.

The mission is scheduled to last seven years, after which time the fuel used to correct the craft’s orbit and orientation is expected to run out. At that point it will continue as long as it can before drift causes it to break apart and, one rather hopes, become part of the sun’s corona itself.

The Parker Solar Probe is scheduled for launch early Saturday morning, and we’ll update this post when it takes off successfully or, as is possible, is delayed until a later date in the launch window.


Autonomous drones could herd birds away from airports


Bird strikes on aircraft may be rare, but not so rare that airports shouldn’t take precautions against them. But keeping birds away is a difficult proposition: How do you control the behavior of flocks of dozens or hundreds of birds? Perhaps with a drone that autonomously picks the best path to do so, like this one developed by Caltech researchers.

Right now airports may use manually piloted drones, which are expensive and of course limited by the number of qualified pilots, or trained falcons — which as you might guess is a similarly difficult method to scale.

Soon-Jo Chung at Caltech became interested in the field after seeing the near-disaster in 2009, when US Airways Flight 1549 nearly crashed due to a bird strike but was guided to a comparatively safe landing in the Hudson.

“It made me think that next time might not have such a happy ending,” he said in a Caltech news release. “So I started looking into ways to protect airspace from birds by leveraging my research areas in autonomy and robotics.”

A drone seems like an obvious solution — put it in the air and send those geese packing. But predicting and reliably influencing the behavior of a flock is no simple matter.

“You have to be very careful in how you position your drone. If it’s too far away, it won’t move the flock. And if it gets too close, you risk scattering the flock and making it completely uncontrollable,” Chung said.

The team studied models of how groups of animals move and affect one another, and arrived at a model of their own describing how birds move in response to threats. From this can be derived the flight path a drone should follow to make the birds swing aside in the desired direction without panicking and scattering.
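The team's actual model is the one in their paper; purely to illustrate the shape of the idea, here is a toy sketch of a flock that coheres to itself, is repelled by a nearby drone, and gets nudged away from a protected area by a drone holding a standoff distance. Every constant and the drone policy below are invented for illustration, not taken from the Caltech work.

```python
import numpy as np

# Toy herding sketch with invented dynamics and constants (not the Caltech
# model): each bird keeps most of its velocity, is pulled gently toward the
# flock's centroid, and is pushed away from the drone. The drone parks itself
# between the flock and the protected area at a fixed standoff distance, far
# enough away that the flock turns together instead of scattering.

PANIC_DIST = 10.0    # assumed distance below which the flock would scatter
STANDOFF = 25.0      # drone's standoff distance from the flock centroid
DT = 0.1             # simulation timestep, arbitrary units

assert STANDOFF > PANIC_DIST  # herd, don't scatter

rng = np.random.default_rng(0)
birds = rng.normal([0.0, 0.0], 5.0, size=(40, 2))   # bird positions
vel = np.zeros_like(birds)                          # bird velocities
protected = np.array([100.0, 0.0])                  # e.g. the runway to keep clear

for _ in range(300):
    centroid = birds.mean(axis=0)

    # Drone policy: sit on the airport side of the flock at the standoff range,
    # so its "push" points away from the protected area.
    toward_airport = (protected - centroid) / np.linalg.norm(protected - centroid)
    drone = centroid + toward_airport * STANDOFF

    # Flock response: cohesion toward the centroid, repulsion from the drone.
    away = birds - drone
    dist = np.linalg.norm(away, axis=1, keepdims=True)
    repulsion = 50.0 * away / dist**2               # stronger when the drone is near
    cohesion = 0.05 * (centroid - birds)
    vel = 0.9 * vel + cohesion + repulsion
    birds = birds + vel * DT

print("flock centroid, pushed away from the runway:", birds.mean(axis=0))
```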

Armed with this new software, drones were deployed in several spaces with instructions to deter birds from entering a given protected area. As you can see below (an excerpt from this video), it seems to have worked:

More experimentation is necessary, of course, to tune the model and get the system to a state that is reliable and works with various sizes of flocks, bird airspeeds, and so on. But it’s not hard to imagine this as a standard system for locking down airspace: a dozen or so drones informed by precision radar could protect quite a large area.

The team’s results are published in IEEE Transactions on Robotics.


There’s more: Google is also said to be developing a censored news app for China


Can Google’s week get any worse? Less than a day after the revelation that it is planning a censored search engine for China, now comes another: the U.S. firm is said to be developing a government-friendly news app for the country, where its search engine and other services remain blocked.

That’s according to The Information, which reports that Google is essentially cloning Toutiao, the hugely popular app from new media startup ByteDance, in a bid to get back into the country and into the minds of its 700 million mobile internet users. Like Toutiao, the app would apparently use AI and algorithms to serve stories to readers — as opposed to real-life human editors — while it, too, would be designed to work within the bounds of Chinese internet censorship.

That last part is interesting because ByteDance and other news apps have gotten into trouble with the government for failing to adequately police the content shared on their platforms. That has resulted in some app store suspensions, but the saga itself is a rite of passage for any internet service that has gained mainstream adoption, so there’s a silver lining in there. The point for Google, though, is that policing this content is not as easy as it may seem.

The Information said the news app is slated for release before the search app, the existence of which was revealed yesterday, but sources told the publication that the ongoing U.S.-China trade war has made things complicated. Specifically, Google executives have “struggled to further engage” China’s internet censor, a key component for the release of an app in China from an overseas company.

There’s plenty of context to this, as I wrote yesterday:

The Intercept’s report comes less than a week after Facebook briefly received approval to operate a subsidiary on Chinese soil. Its license was, however, revoked as news of the approval broke. The company said it had planned to open an innovation center, but it isn’t clear whether that will be possible now.

Facebook previously built a censorship-friendly tool that could be deployed in China.

While its U.S. peer has struggled to get a read on China, Google has been noticeably increasing its presence in the country over the past year or so.

The company has opened an AI lab in Beijing, been part of investment rounds for Chinese companies, including a $550 million deal with JD.com, and inked a partnership with Tencent. It has also launched products, with a file management service for Android distributed via third-party app stores and, most recently, its first mini program for Tencent’s popular WeChat messaging app.

As for Google, the company pointed us to the same statement it issued yesterday:

We provide a number of mobile apps in China, such as Google Translate and Files Go, help Chinese developers, and have made significant investments in Chinese companies like JD.com. But we don’t comment on speculation about future plans.

Despite the two-for-one value of that PR message, this is a disaster. Plotting to collude with governments to censor the internet never goes down well, especially in double helpings.


JBL’s $250 Google Assistant smart display is now available for pre-order


It’s been a week since Lenovo’s Google Assistant-powered smart display went on sale. Slowly but surely, its competitors are launching their versions, too. Today, JBL announced that its $249.95 JBL Link View is now available for pre-order, with an expected ship date of September 3, 2018.

JBL went for a slightly different design than Lenovo (and the upcoming LG WK9), but in terms of functionality, these devices are pretty much the same. The Link View features an 8-inch HD screen; unlike Lenovo’s Smart Display, JBL is not making a larger 10-inch version. It’s got two 10W speakers and the usual support for Bluetooth, as well as Google’s Chromecast protocol.

JBL says the unit is splash proof (IPX4), so you can safely use it to watch YouTube recipe videos in your kitchen. It also offers a 5MP front-facing camera for your video chats and a privacy switch that lets you shut off the camera and microphone.

JBL, Lenovo and LG all announced their Google Assistant smart displays at CES earlier this year. Lenovo was the first to actually ship a product, and both the hardware and Google’s software received a positive reception. There’s no word on when LG’s WK9 will hit the market.


OpenAI’s robotic hand doesn’t need humans to teach it human behaviors


Gripping something with your hand is one of the first things you learn to do as an infant, but it’s far from a simple task, and it only gets more complex and variable as you grow up. This complexity makes gripping a difficult skill for machines to teach themselves, but researchers at the Elon Musk and Sam Altman-backed OpenAI have created a system that not only holds and manipulates objects much like a human does, but that developed these behaviors all on its own.

Many robots and robotic hands are already proficient at certain grips or movements — a robot in a factory can wield a bolt gun even more dexterously than a person. But the software that lets that robot do that task so well is likely to be hand-written and extremely specific to the application. You couldn’t, for example, give it a pencil and ask it to write. Even something on the same production line, like welding, would require a whole new system.

Yet for a human, picking up an apple isn’t so different from picking up a cup. There are differences, but our brains automatically fill in the gaps and we can improvise a new grip, hold an unfamiliar object securely and so on. This is one area where robots lag severely behind their human models. And furthermore, you can’t just train a bot to do what a human does — you’d have to provide millions of examples to adequately show what a human would do with thousands of given objects.

The solution, OpenAI’s researchers felt, was not to use human data at all. Instead, they let the computer try and fail over and over in a simulation, slowly learning how to move its fingers so that the object in its grasp moves as desired.

The system, which they call Dactyl, was provided only with the positions of its fingers and three camera views of the object in hand — but remember, while it was being trained, all this data was simulated, taking place in a virtual environment. There, the computer doesn’t have to work in real time — it can try a thousand different ways of gripping an object in a few seconds, analyzing the results and feeding that data forward into the next try. (The hand itself is a Shadow Dexterous Hand, which is also more complex than most robotic hands.)

In addition to different objects and poses the system needed to learn, there were other randomized parameters, like the amount of friction the fingertips had, the colors and lighting of the scene and more. You can’t simulate every aspect of reality (yet), but you can make sure that your system doesn’t only work in a blue room, on cubes with special markings on them.
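This technique is commonly called domain randomization. Here's a minimal sketch of what per-episode randomization can look like; the parameter names and ranges are invented for illustration and are not OpenAI's published configuration.

```python
import random

# Minimal sketch of per-episode domain randomization. The parameter names and
# ranges are invented for illustration; they are not OpenAI's published
# configuration. The idea: every simulated episode gets a different physics and
# rendering setup, so a policy that copes with all of them has a better chance
# of coping with the one configuration it never trained on: reality.

def sample_sim_params():
    return {
        "fingertip_friction": random.uniform(0.5, 1.5),
        "object_mass_kg":     random.uniform(0.03, 0.3),
        "object_size_scale":  random.uniform(0.95, 1.05),
        "light_intensity":    random.uniform(0.4, 1.6),
        "camera_jitter_deg":  random.uniform(-2.0, 2.0),
        "action_delay_ms":    random.uniform(0.0, 40.0),
    }

# In training, each episode would reset a physics simulator with a fresh draw,
# e.g. env.reset(**sample_sim_params()) for a hypothetical env object, before
# the policy collects experience and updates itself.
for episode in range(3):
    print(f"episode {episode}: {sample_sim_params()}")
```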

They threw a lot of power at the problem: 6144 CPUs and 8 GPUs, “collecting about one hundred years of experience in 50 hours.” And then they put the system to work in the real world for the first time — and it demonstrated some surprisingly human-like behaviors.

The things we do with our hands without even noticing, like turning an apple around to check for bruises or passing a mug of coffee to a friend, use lots of tiny tricks to stabilize or move the object. Dactyl recreated several of them, for example holding the object with a thumb and single finger while using the rest to spin to the desired orientation.

What’s great about this system is not just the naturalness of its movements and that they were arrived at independently by trial and error, but that it isn’t tied to any particular shape or type of object. Just like a human, Dactyl can grip and manipulate just about anything you put in its hand, within reason of course.

This flexibility is called generalization, and it’s important for robots that must interact with the real world. It’s impossible to hand-code separate behaviors for every object and situation in the world, but a robot that can adapt and fill in the gaps while relying on a set of core understandings can get by.

As with OpenAI’s other work, the paper describing the results is freely available, as are some of the tools they used to create and test Dactyl.


SmartArm’s AI-powered prosthesis takes the prize at Microsoft’s Imagine Cup


A pair of Canadian students making a simple, inexpensive prosthetic arm have taken home the grand prize at Microsoft’s Imagine Cup, a global startup competition the company holds yearly. SmartArm will receive $85,000, a mentoring session with CEO Satya Nadella, and some other Microsoft goodies. But they were far from the only worthy team from the dozens that came to Redmond to compete.

The Imagine Cup is an event I personally look forward to, because it consists entirely of smart young students, usually engineers and designers themselves (not yet “serial entrepreneurs”) and often aiming to solve real-world problems.

In the semi-finals I attended, I saw a pair of young women from Pakistan looking to reduce stillbirth rates with a new pregnancy monitor, an automated eye-checking device that can be deployed anywhere and used by anyone, and an autonomous monitor for water tanks in drought-stricken areas. When I was their age, I was living at my mom’s house, getting really good at Mario Kart for SNES and working as a preschool teacher.

Even Nadella bowed before their ambitions in his appearance on stage at the final event this morning.

“Last night I was thinking, ‘What advice can I give people who have accomplished so much at such a young age?’ And I said, I should go back to when I was your age and doing great things. Then I realized…I definitely wouldn’t have made these finals.”

That got a laugh, but (with apologies to Nadella) it’s probably true. Students today have unbelievable resources available to them and as many of the teams demonstrated, they’re making excellent use of those resources.

Congratulations to Team smartARM from #Canada, champion of today’s #ImagineCup! Watch the live show on demand at https://t.co/BLxnJ9FGxJ 🏆 pic.twitter.com/86itWke2du

— Microsoft Imagine (@MSFTImagine) July 25, 2018

SmartArm in particular combines a clever approach with state-of-the-art tech in a way that’s so simple it’s almost ridiculous.

The issue they saw as needing a new approach is prosthetic arms, which, as they pointed out, are often either non-functional (think just a plastic arm or a simple flexion-based gripper) or highly expensive (a mechanical arm might cost tens of thousands of dollars). Why can’t an arm be both functional and affordable?

Their solution is an extremely interesting and timely one: a relatively simply actuated 3D-printed forearm and hand that has its own vision system built in. A camera built into the palm captures an image of the item the user aims to pick up, and quickly classifies it — an apple, a key ring, a pen — and selects the correct grip for that object.

The user activates the grip by flexing their upper arm muscles, an action that’s detected by a Myo-like muscle sensor (possibly actually a Myo, but I couldn’t tell from the demo). It sends the signal to the arm to activate the hand movement, and the fingers move accordingly.
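As described, the control flow is simple: classify what the palm camera sees, map the class to a preset grip, and fire the grip when the muscle sensor detects a flex. Here's an illustrative sketch of that pipeline; every class, function and threshold in it is a hypothetical stand-in, not SmartArm's actual code.

```python
# Illustrative sketch of the pipeline described above: the palm camera's image
# is classified, the class is mapped to a preset grip, and an upper-arm flex
# (read by a Myo-like EMG sensor) triggers the motion. Every name below is a
# hypothetical stand-in; this is not SmartArm's actual code or API.

GRIP_PRESETS = {
    "apple":    "spherical_grasp",
    "key_ring": "pinch_grasp",
    "pen":      "tripod_grasp",
}

def choose_grip(label: str) -> str:
    """Map a detected object class to a preset grip, with a generic fallback."""
    return GRIP_PRESETS.get(label, "power_grasp")

def control_step(camera, classifier, emg_sensor, hand, flex_threshold=0.6):
    """One pass: classify what the palm sees, grip it when the user flexes."""
    label = classifier.predict(camera.capture())   # e.g. "apple"
    grip = choose_grip(label)
    if emg_sensor.read_flex() > flex_threshold:    # the user confirms with a flex
        hand.execute(grip)                         # actuate the printed fingers
    # A second flex would release the grip, re-exposing the camera for the next task.
```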

It’s still extremely limited — you likely can’t twist a doorknob with it, or reliably grip a knife or fork, and so on. But for many everyday tasks it could still be useful. And the idea of putting the camera in the palm is a high-risk, high-reward one. It is of course blocked when you pick up the item, but what does it need to see during that time? You deactivate the grip to put the cup down and the camera is exposed again to watch for the next task.

Bear in mind this is not meant as some kind of serious universal hand replacement. But it provides smart, simple functionality for people who might otherwise have had to use a pincer arm or the like. And according to the team, it should cost less than $100. How that’s possible, arm sensor included, is unclear to me, but I’m not the one who built a bionic arm, so I’m going to defer to them on this. Even if they miss that target by 50 percent, it would still be a huge bargain, honestly.

There’s an optional subscription that would allow the arm to improve itself over time as it learns more about your habits and objects you encounter regularly — this would also conceivably be used to improve other SmartArms as well.

As for how it looks — rather robotic — the team defended it based on their own feedback from amputees: “They’d rather be asked ‘hey, where did you get that arm?’ than ‘what happened to your arm?’” But a more realistic-looking set of fingers is also under development.

The team said they were originally looking for venture funding but ended up getting a grant instead; they’ve got interest from a number of Canadian and American institutions already, and winning the Imagine Cup will almost certainly propel them to greater prominence in the field.

My own questions would be on durability, washing, and the kinds of things that really need to be tested in real-world scenarios. What if the camera lens gets dirty or scratched? Will there be color options for people who don’t want to have white “skin” on their arm? What’s the support model? What about insurance?

SmartArm takes the grand prize, but the runners up and some category winners get a bunch of good stuff too. I plan to get in touch with SmartArm and several other teams from the competition to find out more and hear about their progress. I was really quite impressed not just with the engineering prowess but the humanitarianism and thoughtfulness on display this year. Nadella summed it up best:

“One of the things that I always think about is this competition in some sense ups the game, right?” he said at the finals. “People from all over the world are thinking about how do I use technology, how do I learn new concepts, but then more importantly, how do I solve some of these unmet, unarticulated needs? The impact that you all can have is just enormous, the opportunity is enormous. But I also believe there is an amazing sense of responsibility, or a need for responsibility that we all have to collectively exercise given the opportunity we have been given.”


Computer vision researchers build an AI benchmark app for Android phones


A group of computer vision researchers from ETH Zurich want to do their bit to enhance AI development on smartphones. To wit: They’ve created a benchmark system for assessing the performance of several major neural network architectures used for common AI tasks.

They’re hoping it will be useful to other AI researchers but also to chipmakers (by helping them get competitive insights); Android developers (to see how fast their AI models will run on different devices); and, well, to phone nerds — such as by showing whether or not a particular device contains the necessary drivers for AI accelerators. (And, therefore, whether or not they should believe a company’s marketing messages.)

The app, called AI Benchmark, is available for download on Google Play and can run on any device with Android 4.1 or higher — generating a score the researchers describe as a “final verdict” of the device’s AI performance.

AI tasks assessed by their benchmark system include image classification, face recognition, image deblurring, image super-resolution, photo enhancement and segmentation.

They are even testing some algorithms used in autonomous driving systems, though there’s not really any practical purpose for doing that at this point. Not yet anyway. (Looking down the road, the researchers say it’s not clear what hardware platform will be used for autonomous driving — and they suggest it’s “quite possible” mobile processors will, in future, become fast enough to be used for this task. So they’re at least prepped for that possibility.)

The app also includes visualizations of the algorithms’ output to help users assess the results and get a feel for the current state-of-the-art in various AI fields.

The researchers hope their score will become a universally accepted metric — similar to DxOMark, which is used for evaluating camera performance — and all algorithms included in the benchmark are open source. The current ranking of different smartphones and mobile processors is available on the project’s webpage.

The benchmark system and app took around three months to develop, says AI researcher and developer Andrey Ignatov.

He explains that the score being displayed reflects two main aspects: The SoC’s speed and available RAM.

“Let’s consider two devices: one with a score of 6000 and one with a score of 200. If some AI algorithm will run on the first device for 5 seconds, then this means that on the second device this will take about 30 times longer, i.e. almost 2.5 minutes. And if we are thinking about applications like face recognition this is not just about the speed, but about the applicability of the approach: Nobody will wait 10 seconds till their phone will be trying to recognize them.

“The same is about memory: The larger is the network/input image — the more RAM is needed to process it. If the phone has a small amount of RAM that is e.g. only enough to enhance 0.3MP photo, then this enhancement will be clearly useless, but if it can do the same job for Full HD images — this opens up much wider possibilities. So, basically the higher score — the more complex algorithms can be used / larger images can be processed / it will take less time to do this.”
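In other words, the score is being used as an inverse proxy for runtime. Treating it as a strict proportionality is an assumption drawn from that example rather than a formula the team publishes, but the arithmetic looks like this:

```python
# Relative-runtime estimate implied by Ignatov's example: runtime is treated as
# scaling with the inverse of the AI Benchmark score. That proportionality is an
# assumption drawn from the 6000-vs-200 example, not a formula the team publishes.

def estimated_runtime(reference_runtime_s: float,
                      reference_score: float,
                      target_score: float) -> float:
    return reference_runtime_s * (reference_score / target_score)

# A task that takes 5 seconds on a device scoring 6000 should take about 30x
# longer on a device scoring 200:
print(estimated_runtime(5.0, 6000, 200))   # 150.0 seconds, i.e. about 2.5 minutes
```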

Discussing the idea for the benchmark, Ignatov says the lab is “tightly bound” to both research and industry — so “at some point we became curious about what are the limitations of running the recent AI algorithms on smartphones”.

“Since there was no information about this (currently, all AI algorithms are running remotely on the servers, not on your device, except for some built-in apps integrated in phone’s firmware), we decided to develop our own tool that will clearly show the performance and capabilities of each device,” he adds. 

“We can say that we are quite satisfied with the obtained results — despite all current problems, the industry is clearly moving towards using AI on smartphones, and we also hope that our efforts will help to accelerate this movement and give some useful information for other members participating in this development.”

After building the benchmarking system and collating scores on a bunch of Android devices, Ignatov sums up the current situation of AI on smartphones as “both interesting and absurd”.

For example, the team found that devices running Qualcomm chips weren’t the clear winners they’d imagined — i.e. based on the company’s promotional materials about the Snapdragon 845’s AI capabilities and 8x performance acceleration.

“It turned out that this acceleration is available only for ‘quantized’ networks that currently cannot be deployed on the phones, thus for ‘normal’ networks you won’t get any acceleration at all,” he says. “The saddest thing is that actually they can theoretically provide acceleration for the latter networks too, but they just haven’t implemented the appropriated drivers yet, and the only possible way to get this acceleration now is to use Snapdragon’s proprietary SDK available for their own processors only. As a result — if you are developing an app that is using AI, you won’t get any acceleration on Snapdragon’s SoCs, unless you are developing it for their processors only.”

The researchers found, by contrast, that Huawei’s Kirin 970 — whose CPU is technically even slower than the Snapdragon 636’s — offered a surprisingly strong performance.

“Their integrated NPU gives almost 10x acceleration for Neural Networks, and thus even the most powerful phone CPUs and GPUs can’t compete with it,” says Ignatov. “Additionally, Huawei P20/P20 Pro are the only smartphones on the market running Android 8.1 that are currently providing AI acceleration, all other phones will get this support only in Android 9 or later.”

It’s not all great news for Huawei phone owners, though, as Ignatov says the NPU doesn’t provide acceleration for ‘quantized’ networks (though he notes the company has promised to add this support by the end of this year); and also it uses its own RAM — which is “quite limited” in size, and therefore you “can’t process large images with it”…

“We would say that if they solve these two issues — most likely nobody will be able to compete with them within the following year(s),” he suggests, though he also emphasizes that this assessment refers only to that one SoC, noting that Huawei’s other processors don’t have the NPU module.

For Samsung processors, the researchers flag up that all the company’s devices are still running Android 8.0 but AI acceleration is only available starting from Android 8.1 and above. Natch.

They also found CPU performance could “vary quite significantly” — up to 50% on the same Samsung device — because of throttling and power optimization logic. Which would then have a knock-on impact on AI performance.

For Mediatek, the researchers found the chipmaker is providing acceleration for both ‘quantized’ and ‘normal’ networks — which means it can reach the performance of “top CPUs”.

But, on the flip side, Ignatov calls out the company’s slogan — that it’s “Leading the Edge-AI Technology Revolution” — dubbing it “nothing more than their dream”, and adding: “Even the aforementioned Samsung’s latest Exynos CPU can slightly outperform it without using any acceleration at all, not to mention Huawei with its Kirin’s 970 NPU.”

“In summary: Snapdragon — can theoretically provide good results, but are lacking the drivers; Huawei — quite outstanding results now and most probably in the nearest future; Samsung — no acceleration support now (most likely this will change soon since they are now developing their own AI Chip), but powerful CPUs; Mediatek — good results for mid-range devices, but definitely no breakthrough.”

It’s also worth noting that some of the results were obtained on prototype samples, rather than shipped smartphones, so haven’t yet been included in the benchmark table on the team’s website.

“We will wait till the devices with final firmware will come to the market since some changes might still be introduced,” he adds.

For more on the pros and cons of AI-powered smartphone features check out our article from earlier this year.


Klang gets $8.95M for an MMO sim sitting atop Improbable’s dev platform


Berlin-based games studio Klang, which is building a massive multiplayer online simulation called Seed utilizing Improbable’s virtual world builder platform, has just bagged $8.95M in Series A funding to support development of the forthcoming title.

The funding is led by veteran European VC firm Northzone. It follows a seed raise for Seed, finalized in March 2018 and led by Makers Fund, with participation from firstminute capital, Neoteny, Mosaic Ventures and Novator — bringing the total funding raised for the project to $13.95M.

The studio was founded in 2013, and originally based in Reykjavík, Iceland, before relocating to Berlin. Klang’s original backers include Greylock Partners, Joi Ito, and David Helgason, as well as original investors London Venture Partners.

The latest tranche of funding will be used to expand its dev team and for continued production on Seed, which is in pre-alpha at this stage — with no release date announced yet.

Nor is there a confirmed pricing model. We understand the team is looking at a variety of ideas at this stage, such as tying the pricing to the costs of simulating the entities.

They have released the below teaser showing the pre-alpha build of the game — which is described as a persistent simulation where players are tasked with colonizing an alien planet, managing multiple characters in real-time and interacting with characters managed by other human players they encounter in the game space.

The persistent element refers to the game engine maintaining character activity after the player has logged off — supporting an unbroken simulation.

Klang touts its founders’ three decades of combined experience working on the MMOs EVE Online and Dust 514, experience that is now being rolled into designing and developing the large, player-driven world they’re building with Seed.

Meanwhile, London-based Improbable bagged a whopping $502M for its virtual world builder SpatialOS just over a year ago. The dev platform lets developers design and build massively detailed environments — what it bills as a new form of simulation on a massive scale — by utilizing distributed cloud computing infrastructure and machine learning technology to run a swarm of hundreds of game engines, so it can support a more expansive virtual world than software running off a single engine or server.

Northzone partner Paul Murphy, who is leading the investment in Klang, told us: “It is unusual to raise for a specific title, and we are for all intents and purposes investing in Klang as a studio. We are very excited about the team and the creative potential of the studio. But our investment thesis is based on looking for something that really stands out and is wildly ambitious over and above everything else that’s out there. That is how we feel about the potential of Seed as a simulation.”


Digging deeper into smart speakers reveals two clear paths


In a truly fascinating exploration into two smart speakers – the Sonos One and the Amazon Echo – BoltVC’s Ben Einstein has found some interesting differences in the way a traditional speaker company and an infrastructure juggernaut look at their flagship devices.

The post is well worth a full read but the gist is this: Sonos, a very traditional speaker company, has produced a good speaker and modified its current hardware to support smart home features like Alexa and Google Assistant. The Sonos One, notes Einstein, is a speaker first and smart hardware second.

“Digging a bit deeper, we see traditional design and manufacturing processes for pretty much everything. As an example, the speaker grill is a flat sheet of steel that’s stamped, rolled into a rounded square, welded, seams ground smooth, and then powder coated black. While the part does look nice, there’s no innovation going on here,” he writes.

The Amazon Echo, on the other hand, looks like what would happen if an engineer was given an unlimited budget and told to build something that people could talk to. The design decisions are odd and intriguing and it is ultimately less a speaker than a home conversation machine. Plus it is very expensive to make.

Pulling off the sleek speaker grille, there’s a shocking secret here: this is an extruded plastic tube with a secondary rotational drilling operation. In my many years of tearing apart consumer electronics products, I’ve never seen a high-volume plastic part with this kind of process. After some quick math on the production timelines, my guess is there’s a multi-headed drill and a rotational axis to create all those holes. CNC drilling each hole individually would take an extremely long time. If anyone has more insight into how a part like this is made, I’d love to see it! Bottom line: this is another surprisingly expensive part.

Sonos, which has been making a form of smart speaker for 15 years, is a CE company with cachet. Amazon, on the other hand, sees its devices as a way into living rooms and a delivery system for sales, and is fine with licensing its tech before making its own. Therefore, comparing the two is a bit disingenuous. Einstein’s thesis is that Sonos’ trajectory is troubled because it depends on linear and closed manufacturing techniques while Amazon spares no expense to make its products, and that’s true. But Sonos makes speakers that work together amazingly well. They’ve done this for a decade and a half. If you compare their products – and I have – with competing smart speakers and non-audiophile “dumb” speakers, you will find their UI, UX and sound quality surpass most comers.

Amazon makes things to communicate with Amazon. This is a big difference.

Where Einstein is correct, however, is in his belief that Sonos is at a definite disadvantage. Sonos chases smart technology while Amazon and Google (and Apple, if their HomePod is any indication) lead. That said, there is some value to having a fully-connected set of speakers with add-on smart features vs. having to build an entire ecosystem of speaker products that can take on every aspect of the home theatre.

On the flip side, Amazon, Apple and Google are chasing audio quality while Sonos leads. While we can say that in the future we’ll all be fine with tinny round speakers bleating out Spotify in various corners of our rooms, there is something to be said for a good set of woofers. Whether this nostalgic love of good sound survives this generation’s tendency to watch and listen to low-resolution media is anyone’s bet, but that’s Amazon’s bet to lose.

Ultimately Sonos is a strong and fascinating company. An upstart that survived the great CE destruction wrought by Kickstarter and Amazon, it produces some of the best mid-range speakers I’ve used. Amazon makes a nice – almost alien – product, but given that it can be easily copied and stuffed into a hockey puck that probably costs less than the entire bill of materials for the Amazon Echo, it’s clear that Amazon’s goal isn’t to make speakers.

Whether the coming Sonos IPO will be successful depends partially on Amazon and Google playing ball with the speaker maker. The rest depends on the quality of product and the dedication of Sonos users. This good will isn’t as valuable as a signed contract with major infrastructure players but Sonos’ good will is far more than Amazon and Google have with their popular but potentially intrusive product lines. Sonos lives in the home while Google and Amazon want to invade it. That is where Sonos wins.


Apple’s Shortcuts will flip the switch on Siri’s potential

Matthew Cassinelli
Contributor

Matthew Cassinelli is a former member of the Workflow team and works as an independent writer and consultant. He previously worked as a data analyst for VaynerMedia.

At WWDC, Apple pitched Shortcuts as a way to “take advantage of the power of apps” and “expose quick actions to Siri.” These will be suggested by the OS, can be given unique voice commands, and will even be customizable with a dedicated Shortcuts app.

But since this new feature won’t let Siri interpret everything, many have been lamenting that Siri didn’t get much better — and is still lacking compared to Google Assistant or Amazon Echo.

But to ignore Shortcuts would be missing out on the bigger picture. Apple’s strengths have always been the device ecosystem and the apps that run on them.

With Shortcuts, both play a major role in how Siri will prove to be a truly useful assistant and not just a digital voice to talk to.

Your Apple devices just got better

For many, voice assistants are a nice-to-have, but not a need-to-have.

It’s undeniably convenient to get facts by speaking to the air, turning on the lights without lifting a finger, or triggering a timer or text message – but so far, studies have shown people don’t use much more than these on a regular basis.

People don’t often do more than that because the assistants aren’t really ready for complex tasks yet, and when your assistant is limited to tasks inside your home or commands spoken into your phone, the drawbacks prevent you from going deep.

If you prefer Alexa, you get more devices, better reliability, and a breadth of skills, but there’s not a great phone or tablet experience you can use alongside your Echo. If you prefer to have Google’s Assistant everywhere, you must be all in on the Android and Home ecosystem to get the full experience too.

Plus, with either option, there are privacy concerns baked into how both work on a fundamental level – over the web.

In Apple’s ecosystem, you have Siri on iPhone, iPad, Apple Watch, AirPods, HomePod, CarPlay, and any Mac. Add in Shortcuts on each of those devices (except the Mac, which still has Automator) and suddenly you have a plethora of places to execute all of these commands entirely by voice.

Each accessory that Apple users own will get upgraded, giving Siri new ways to fulfill the 10 billion and counting requests people make each month (according to Craig Federighi’s statement on-stage at WWDC).

But even more important than all the places where you can use your assistant is how – with Shortcuts, Siri gets even better with each new app that people download. There’s the other key difference: the App Store.

Actions are the most important part of your apps

iOS has always had a vibrant community of developers who create powerful, top-notch applications that push the system to its limits and take advantage of the ever-increasing power these mobile devices have.

Shortcuts opens up those capabilities to Siri – every action you take in an app can be shared out with Siri, letting people interact right there inline or using only their voice, with the app running everything smoothly in the background.

Plus, the functional approach that Apple is taking with Siri creates new opportunities for developers to provide utility to people instead of requiring their attention. The suggestions feature of Shortcuts rewards “acceleration,” surfacing more often the apps that save the user the most time and see the most use.

This opens the door to more specialized types of apps that don’t necessarily have to grow a huge audience and serve them ads – if you can make something that helps people, Shortcuts can help them use your app more than ever before (and without as much effort). Developers can make a great experience for when people visit the app, but also focus on actually doing something useful too.

This isn’t a virtual assistant that lives in the cloud, but a digital helper that can pair up with the apps uniquely taking advantage of Apple’s hardware and software capabilities to truly improve your use of the device.

In the most groan-inducing way possible, “there’s an app for that” is back and more important than ever. Not only are apps the centerpiece of the Siri experience, but it’s their capabilities that extend Siri’s – the better the apps you have, the better Siri can be.

Control is at your fingertips

Importantly, Siri gets all of this Shortcuts power while keeping the control in each person’s hands.

All of the information provided to the system is securely passed along by individual apps – if something doesn’t look right, you can just delete the corresponding app and the information is gone.

Siri will make recommendations based on activities deemed relevant by the apps themselves as well, so over-active suggestions shouldn’t be common (unless you’re way too active in some apps, in which case they added Screen Time for you too).

Each of the voice commands is custom per user as well, so people can ignore their apps’ suggestions and set up the phrases to their own liking. This means nothing is already “taken” because somebody signed up for the skill first (unless you’ve already used it yourself, of course).

Also, Shortcuts don’t require the web to work – the voice triggers might not work, but the suggestions and Shortcuts app give you a place to use your assistant voicelessly. And importantly, Shortcuts can use the full power of the web when they need to.

This user-centric approach paired with the technical aspects of how Shortcuts works gives Apple’s assistant a leg up for any consumers who find privacy important. Essentially, Apple devices are only listening for “Hey Siri”, then the available Siri domains + your own custom trigger phrases.

Without exposing your information to the world or teaching a robot to understand everything, Apple gave Siri a slew of capabilities that in many ways can’t be matched. With Shortcuts, it’s the apps, the operating system, and the variety of hardware that will make Siri uniquely qualified come this fall.

Plus, the Shortcuts app will provide a deeper experience for those who want to chain together actions and customize their own shortcuts.

There’s lots more under the hood to experiment with, but this will allow anyone to tweak & prod their Siri commands until they have a small army of custom assistant tasks at the ready.

Hey Siri, let’s get started

Siri doesn’t know all, can’t perform just any task you bestow upon it, and won’t make somewhat uncanny phone calls on your behalf.

But instead of spending time conversing with a somewhat faked “artificial intelligence”, Shortcuts will help people use Siri as an actual digital assistant – a computer to help them get things done better than they might’ve otherwise.

With Siri’s new skills extending to each of your Apple products (except for the Apple TV and the Mac, but maybe one day?), every new device you get and every new app you download can reveal another way to take advantage of what this technology can offer.

This broadening of Siri may take some time to get used to – it will be about finding the right place for it in your life.

As you go about your apps, you’ll start seeing and using suggestions. You’ll set up a few voice commands, then you’ll do something like kick off a truly useful shortcut from your Apple Watch without your phone connected and you’ll realize the potential.

This is a real digital assistant, your apps know how to work with it, and it’s already on many of your Apple devices. Now, it’s time to actually make use of it.
