
Waymo has now driven 10 billion autonomous miles in simulation


Alphabet’s Waymo autonomous driving company announced a new milestone at TechCrunch Sessions: Mobility on Wednesday: 10 billion miles driven in simulation. This is a significant achievement for the company, because all those simulated miles add up to considerable training experience for its self-driving software.

Waymo also probably has the most experience when it comes to actual, physical road miles driven — the company is always quick to point out that it’s been doing this far longer than just about anyone else working in autonomous driving, thanks to its head start as Google’s self-driving car moonshot project.

“At Waymo, we’ve driven more than 10 million miles in the real world, and over 10 billion miles in simulation,” Waymo CTO Dmitri Dolgov told TechCrunch’s Kirsten Korosec on the Sessions: Mobility stage. “And the amount of driving you do in both of those is really a function of the maturity of your system, and the capability of your system. If you’re just getting started, it doesn’t matter – you’re working on the basics, you can drive a few miles or a few thousand or tens of thousands of miles in the real world, and that’s plenty to tell you and give you information that you need to know to improve your system.”

Dolgov’s point is that the more advanced your autonomous driving system becomes, the more miles you actually need to drive to have an impact, because you’ve handled the basics and are moving on to edge cases, advanced navigation and ensuring that the software works in any and every scenario it encounters. Plus, your simulation becomes more sophisticated and more accurate as you accumulate real-world driving miles, which means the results of your virtual testing are more reliable for use back in your cars driving on actual roads.
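
The article doesn’t describe Waymo’s simulator internals, but one technique commonly described in the industry is “fuzzing”: expanding a single logged real-world event into thousands of simulated variants. A minimal sketch, in which all field names and perturbation ranges are invented for illustration:

```python
import random

def fuzz_scenario(logged_event, n_variants=1000, seed=0):
    """
    Expand one logged real-world event into many simulated variants by
    perturbing the other agent's speed, position and reaction delay.
    (Toy illustration only -- not Waymo's actual pipeline.)
    """
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        v = dict(logged_event)  # shallow copy of the logged event
        v["other_speed_mph"] = logged_event["other_speed_mph"] + rng.uniform(-10, 10)
        v["other_offset_m"] = logged_event["other_offset_m"] + rng.uniform(-5, 5)
        v["reaction_delay_s"] = max(0.0, rng.gauss(0.8, 0.3))
        variants.append(v)
    return variants

# One real cut-in event becomes a thousand simulated edge cases
base = {"kind": "cut_in", "other_speed_mph": 62.0, "other_offset_m": 12.0}
variants = fuzz_scenario(base, n_variants=1000)
print(len(variants))  # 1000
```

This is why simulation scale compounds with real-world experience: each mile of logged driving seeds many more miles of targeted virtual testing.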

This is what leads Dolgov to the conclusion that Waymo’s simulation is likely better than a lot of comparable simulation training at other autonomous driving companies.

“I think what makes it a good simulator, and what makes it powerful is two things,” Dolgov said onstage. “One [is] fidelity. And by fidelity, I mean, not how good it looks. It’s how well it behaves, and how representative it is of what you will encounter in the real world. And then second is scale.”

In other words, experience isn’t beneficial purely in terms of volume — it’s about sophistication, maturity and readiness for commercial deployment.

Powered by WPeMatico

Is your product’s AI annoying people?

James Glasnapp, Contributor

James Glasnapp is a senior UX researcher at PARC.

Artificial intelligence is allowing us all to consider surprising new ways to simplify the lives of our customers. As a product developer, your central focus is always on the customer. But new problems can arise when the specific solution under development helps one customer while alienating others.

We tend to think of AI as an incredible dream assistant to our lives and business operations, when that’s not always the case. Designers of new AI services should consider in what ways, and for whom, these services might be annoying, burdensome or problematic — whether for the direct customer or for others intertwined with that customer. When AI services that make tasks easier for our customers end up making things more difficult for others, that outcome can ultimately cause real harm to our brand perception.

Let’s consider one personal example taken from my own use of Amy.ai, a service (from x.ai) that provides AI assistants named Amy and Andrew Ingram. Amy and Andrew are AI assistants that help schedule meetings for up to four people. This service solves the very relatable problem of scheduling meetings over email, at least for the person who is trying to do the scheduling.

After all, who doesn’t want a personal assistant to whom you can simply say, “Amy, please find the time next week to meet with Tom, Mary, Anushya and Shiveesh.” In this way, you don’t have to arrange a meeting room, send the email, and go back and forth managing everyone’s replies. My own experience showed that while it was easier for me to use Amy to find a good time to meet with my four colleagues, it soon became a headache for those other four people. They resented me for it after being bombarded by countless emails trying to find some mutually agreeable time and place for everyone involved.

Automotive designers are another group that’s incorporating all kinds of new AI systems to enhance the driving experience. For instance, Tesla recently updated its autopilot software to allow a car to change lanes automatically when it sees fit, presumably when the system interprets that the next lane’s traffic is going faster.

In concept, this idea seems advantageous to the driver who can make a safe entrance into faster traffic, while relieving any cognitive burden of having to change lanes manually. Furthermore, by allowing the Tesla system to change lanes, it takes away the desire to play Speed Racer or edge toward competitiveness that one may feel on the highway.

However, drivers in other lanes who are forced to react to the Tesla Autopilot may be annoyed if the Tesla jerks, slows down or behaves outside the normal realm of what people expect on the freeway. Moreover, if they are driving very fast and the Autopilot did not recognize their high rate of speed when it decided to make the lane change, those drivers can get annoyed. We can all relate to driving 75 mph in the fast lane, only to have someone suddenly pull in front of us at 70 as if they were clueless that the lane was moving at 75.
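
Tesla’s actual planner is proprietary, but a toy heuristic — with invented names and thresholds — illustrates why ignoring the approaching driver’s speed produces exactly the annoyance described above:

```python
def should_change_lanes(own_speed, target_lane_flow, approaching_speed, gap_s):
    """
    Toy heuristic (not Tesla's actual logic): change lanes only if the
    target lane is meaningfully faster AND the merge would not force a
    faster approaching driver to brake. Speeds in mph, gap in seconds.
    """
    if target_lane_flow < own_speed + 5:             # not worth the manoeuvre
        return False
    if approaching_speed > own_speed and gap_s < 3:  # we'd cut someone off
        return False
    return True

# The article's scenario: pulling in at 70 ahead of traffic moving at 75
print(should_change_lanes(own_speed=70, target_lane_flow=75,
                          approaching_speed=75, gap_s=2))  # False
```

A planner that only checks the first condition would make the merge and earn the finger; the second condition encodes the “driving etiquette” the author describes.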

For two-lane traffic highways that are not busy, the Tesla software might work reasonably well. However, in my experience of driving around the congested freeways of the Bay Area, the system performed horribly whenever I changed crowded lanes, and I knew that it was angering other drivers most of the time. Even without knowing those irate drivers personally, I care enough about driving etiquette to politely change lanes without getting the finger from them for doing so.


Another example from the internet world involves Google Duplex, a clever feature for Android phone users that allows AI to make restaurant reservations. From the consumer point of view, having an automated system to make a dinner reservation on one’s behalf sounds excellent. It is advantageous to the person making the reservation because, theoretically, it will save the burden of calling when the restaurant is open and the hassle of dealing with busy signals and callbacks.

However, this tool is also potentially problematic for the restaurant worker who answers the phone. Even though the system may introduce itself as artificial, the burden shifts to the restaurant employee to adapt and master a new and more limited interaction to achieve the same goal — making a simple reservation.

On the one hand, Duplex is bringing customers to the restaurant, but on the other hand, the system is narrowing the scope of interaction between the restaurant and its customer. The restaurant may have other tables on different days, or it may be able to squeeze you in if you leave early, but the system might not handle exceptions like this. Even the idea of an AI bot bothering the host who answers the phone doesn’t seem quite right.

As you think about making the lives of your customers easier, consider how the assistance you are dreaming about might be more of a nightmare for everyone else associated with your primary customer. If there is a question regarding the negative experience of anyone related to your AI product, explore that experience further to determine if there is another better way to still delight them without angering their neighbors.

From a user-experience perspective, developing a customer journey map can be a helpful way to explore the actions, thoughts and emotional experiences of your primary customer or “buyer persona.” Identify the touchpoints in which your system interacts with innocent bystanders who are not your direct customers. For those people unaware of your product, explore their interaction with your buyer persona, specifically their emotional experience.

An aspirational goal should be to delight this adjacent group of people enough that they move toward being prospects and, eventually, become your customers as well. You can also use participant ethnography — a research method that combines observation of people as they interact with the product and its surrounding processes — to analyze the innocent bystander in relation to your product.

A guiding design inspiration for this research could be, “How can our AI system behave in such a way that everyone who might come into contact with our product is enchanted and wants to know more?”

That’s just human intelligence, and it’s not artificial.


Europe publishes common drone rules, giving operators a year to prepare


Europe has today published common rules for the use of drones. The European Union Aviation Safety Agency (EASA) says the regulations, which will apply universally across the region, are intended to help drone operators of all stripes have a clear understanding of what is and is not allowed.

Having a common set of rules also means drones can be operated across European borders without worrying about differences in regulations.

“Once drone operators have received an authorisation in the state of registration, they are allowed to freely circulate in the European Union. This means that they can operate their drones seamlessly when travelling across the EU or when developing a business involving drones around Europe,” writes EASA in a blog post.

Although published today and due to come into force within 20 days, the common rules won’t yet apply — with Member States getting another year, until June 2020, to prepare to implement the requirements.

Key among them is that starting from June 2020 the majority of drone operators will need to register themselves before using a drone, either where they reside or have their main place of business.

Some additional requirements have later deadlines as countries gradually switch over to the new regime.

The pan-EU framework creates three categories of operation for drones — ‘open’ (for low-risk craft of up to 25kg), ‘specific’ (where drones will require authorization to be flown) and ‘certified’ (the highest-risk category, such as operating delivery or passenger drones, or flying over large gatherings of people) — each with its own set of regulations.
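
The actual regulation’s tests are far more nuanced than weight alone, but the triage between the three categories can be caricatured in a few lines (everything beyond the 25kg figure below is a deliberate simplification, not the legal criteria):

```python
def eu_drone_category(weight_kg, over_assemblies, carries_cargo_or_people):
    """Deliberately simplified mapping onto the EU's three operation categories."""
    if carries_cargo_or_people or over_assemblies:
        return "certified"   # highest risk: delivery/passenger drones, crowds
    if weight_kg <= 25:
        return "open"        # low-risk craft
    return "specific"        # everything else needs authorisation to fly

print(eu_drone_category(0.9, False, False))   # open
print(eu_drone_category(40.0, False, False))  # specific
print(eu_drone_category(20.0, True, False))   # certified
```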

The rules also include privacy provisions, such as a requirement that owners of drones with sensors that could capture personal data should be registered to operate the craft (with an exception for toy drones).

The common rules will replace national regulations that individual EU countries may have already implemented. Member states will, however, retain the ability to set their own no-fly zones — such as over sensitive installations/facilities and/or gatherings of people — with the regulation setting out the “possibility for Member States to lay down national rules to make subject to certain conditions the operations of unmanned aircraft for reasons falling outside the scope of this Regulation, including environmental protection, public security or protection of privacy and personal data in accordance with the Union law”.

The harmonization of drone rules is likely to be welcomed by operators in Europe who currently face having to do a lot of due diligence ahead of deciding whether or not to pack a drone in their suitcase before heading to another EU country.

EASA also suggests the common rules will reduce the likelihood of another major disruption — such as the unidentified drone sightings that grounded flights at Gatwick Airport just before Christmas, stranding thousands of travellers — given the registration requirement, and a stipulation that new drones must be individually identifiable to make it easier to trace their owner.

“The new rules include technical as well as operational requirements for drones,” it writes. “On one hand they define the capabilities a drone must have to be flown safely. For instance, new drones will have to be individually identifiable, allowing the authorities to trace a particular drone if necessary. This will help to better prevent events similar to the ones which happened in 2018 at Gatwick and Heathrow airports. On the other hand the rules cover each operation type, from those not requiring prior authorisation, to those involving certified aircraft and operators, as well as minimum remote pilot training requirements.

“Europe will be the first region in the world to have a comprehensive set of rules ensuring safe, secure and sustainable operations of drones both, for commercial and leisure activities. Common rules will help foster investment, innovation and growth in this promising sector,” adds Patrick Ky, EASA’s executive director, in a statement.


Alibaba will let you find restaurants and order food with voice in a car


Competition in the Chinese internet has for years been about who controls your mobile apps. These days, giants are increasingly turning to offline scenarios, including what’s going on behind the dashboard in your car.

On Tuesday, Alibaba announced at the annual Shanghai Auto Show that it’s developing apps for connected cars that will let drivers find restaurants, queue up and make reservations, order food and eventually complete a plethora of other tasks using voice, motion or touch control. Third-party developers are invited to build their own in-car apps, which will run on Alibaba’s operating system, AliOS.

Rather than working as standalone apps, these in-car services come in the form of “mini apps” — trimmed-down apps that trade features for faster access and smaller file sizes — inside Alibaba’s all-in-one digital wallet, Alipay. Alibaba has other so-called “super apps” in its ecosystem, such as marketplace Taobao and navigation service AutoNavi, but the payments app clearly makes more economic sense if Alibaba wants people to spend more while sitting in a four-wheeler.

There’s no timeline for when Alibaba will officially roll out in-car mini apps, but it’s already planning for a launch, a company spokesperson told TechCrunch.

Making lite apps has been a popular strategy for China’s internet giants operating super apps that host outside apps, or “mini-apps”; that way users rarely need to leave their ecosystems. These lite apps are known to be easier and cheaper to build than a native app, although developers have to make concessions, like giving their hosts a certain level of access to user data and obeying rules as they would with Apple’s App Store. For in-car services, Alibaba says there will be “specific review criteria for safety and control” tailored to the auto industry.

Photo source: Alibaba

Alibaba’s move is indicative of heightened competition to control the operating system in next-gen connected cars. For those wondering whether the e-commerce behemoth will make its own cars — it has aggressively pushed into physical spaces, for instance by opening its own supermarket chain Hema — the company’s answer for vehicles appears to lie on the software front, at least for now.

In 2017, Alibaba rebranded its operating system as AliOS with a deep focus on getting it into partners’ cars. To achieve this goal, Alibaba also set up a joint venture called Banma Network with state-owned automaker SAIC Motor and Dongfeng Peugeot Citroen, the French car company’s China venture, to hawk and integrate AliOS-powered solutions for car clients. As of last August, 700,000 AliOS-powered SAIC vehicles had been sold.

Alibaba competitors Tencent and Baidu have also driven into the auto field, although through slightly different routes. Baidu began by betting on autonomous driving and built an Android-like developer platform for car manufacturers. While that futuristic plan is far from bearing significant commercial fruit, it has gained the company a strong foothold in self-driving, with the most mileage driven in Beijing, a pivotal hub for testing autonomous cars. Tencent’s car initiatives seem more nebulous. Like Baidu, it’s testing self-driving; like Alibaba, it’s partnered with industry veterans to make cars. But it’s unclear where the advantage lies for the social media and gaming giant in the auto space.


Talk all things robotics and AI with TechCrunch writers


This Thursday, we’ll be hosting our third annual Robotics + AI TechCrunch Sessions event at UC Berkeley’s Zellerbach Hall. The day is packed start-to-finish with intimate discussions on the state of robotics and deep learning with key founders, investors, researchers and technologists.

The event will dig into recent developments in robotics and AI, which startups and companies are driving the market’s growth and how the evolution of these technologies may ultimately play out. In preparation for the event, TechCrunch’s Brian Heater spent time over the last several months visiting some of the top robotics companies in the country. Brian will be on the ground at the event alongside Lucas Matney. On Friday at 11:00 am PT, Brian and Lucas will share with Extra Crunch members (on a conference call) what they saw and what excited them most.

Tune in to find out what you might have missed and to ask Brian and Lucas anything else about robotics, AI or hardware. Want to attend the event in Berkeley this week? It’s not too late to get tickets.

To listen to this and all future conference calls, become a member of Extra Crunch. Learn more and try it for free.


Europe is prepared to rule over 5G cybersecurity


The European Commission’s digital commissioner has warned the mobile industry to expect it to act over security concerns attached to Chinese network equipment makers.

The Commission is considering a de facto ban on kit made by Chinese companies, including Huawei, in the face of security and espionage concerns, per Reuters.

Appearing on stage at the Mobile World Congress tradeshow in Barcelona today, Mariya Gabriel, European commissioner for digital economy and society, flagged network “cybersecurity” during her scheduled keynote, telling delegates it’s stating the obvious to say that “when 5G services become mission critical 5G networks need to be secure”.

Geopolitical concerns between the West and China are being pushed to the fore as the era of 5G network upgrades approaches, accelerated by ongoing tensions between the U.S. and China over trade.

“I’m well aware of the unrest among all of you key actors in the telecoms sector caused by the ongoing discussions around the cybersecurity of 5G,” Gabriel continued, fleshing out the Commission’s current thinking. “Let me reassure you: The Commission takes your view very seriously. Because you need to run these systems every day. Nobody is helped by premature decisions based on partial analysis of the facts.

“However it is also clear that Europe has to have a common approach to this challenge. And we need to bring it on the table soon. Otherwise there is a risk that fragmentation rises because of diverging decisions taken by Member States trying to protect themselves.”

“We all know that this fragmentation damages the digital single market. So therefore we are working on this important matter with priority. And to the Commission we will take steps soon,” she added.

The theme of this year’s show is “intelligent connectivity”; the notion that the incoming 5G networks will not only create links between people and (many, many more) things but understand the connections they’re making at a greater depth and resolution than has been possible before, leveraging the big data generated by many more connections to power automated decision-making in near real time, with low latency another touted 5G benefit (as well as many more connections per cell).

Futuristic scenarios being floated include connected cars neatly pulling to the sides of the road ahead of an ambulance rushing a patient to hospital — or indeed medical operations being aided and even directed remotely in real-time via 5G networks supporting high resolution real-time video streaming.

But for every touted benefit there are easy-to-envisage risks to network technology that’s being designed to connect everything all of the time — thereby creating a new and more powerful layer of critical infrastructure that society will be relying upon.

Last fall the Australian government issued new security guidelines for 5G networks that essentially block Chinese companies such as Huawei and ZTE from providing equipment to operators — justifying the move by saying that differences in the way 5G operates compared to previous network generations introduce new risks to national security.

New Zealand followed suit shortly after, saying kit from the Chinese companies posed a significant risk to national security.

In the U.S., President Trump has made 5G network security a national security priority since 2017, and a bill was passed last fall banning Chinese companies from supplying certain components and services to government agencies.

The ban is due to take effect over two years, but lawmakers have been pressuring local carriers to drop 5G collaborations with companies such as Huawei.

In Europe the picture is so far more mixed. A UK government report last summer investigating Huawei’s broadband and mobile infrastructure raised further doubts, and last month Germany was reported to be mulling a 5G ban on the Chinese kit maker.

But more recently the two EU Member States have been reported to no longer be leaning towards a total ban — apparently believing any risk can be managed and mitigated by oversight and/or partial restrictions.

It remains to be seen how the Commission could step in to try to harmonize security actions taken by Member States around nascent 5G networks. But it appears prepared to set rules.

That said, Gabriel gave no hint of the Commission’s thinking today, beyond repeating its preferred position of less fragmentation and more harmonization, to avoid collateral damage to its overarching Digital Single Market initiative — i.e. if Member States start fragmenting into a patchwork based on varying security concerns.

We’ve reached out to the Commission for further comment and will update this story with any additional context.

During the keynote she was careful to talk up the transformative potential of 5G connectivity while also saying innovation must work in lock-step with European “values”.

“Europe has to keep pace with other regions and early movers while making sure that its citizens and businesses benefit swiftly from the new infrastructures and the many applications that will be built on top of them,” she said.

“Digital is helping us and we need to reap its opportunities, mitigate its risks and make sure it is respectful of our values as much as driven by innovation. Innovation and values. Two key words. That is the vision we have delivered in terms of the defence for our citizens in Europe. Together we have decided to construct a Digital Single Market that reflects the values and principles upon which the European Union has been built.”

Her speech also focused on AI, with the commissioner highlighting various EC initiatives to invest in and support private sector investment in artificial intelligence — saying it’s targeting €20BN in “AI-directed investment” across the private and public sector by 2020, with the goal for the next decade being “to reach the same amount as an annual average” — and calling on the private sector to “contribute to ensure that Europe reaches the level of investment needed for it to become a world stage leader also in AI”.

But again she stressed the need for technology developments to be thoughtfully managed so they reflect the underlying society rather than negatively disrupting it. The goal should be what she dubbed “human-centric AI”.

“When we talk about AI and new technologies development for us Europeans it is not only about investing. It is mainly about shaping AI in a way that reflects our European values and principles. An ethical approach to AI is key to enable competitiveness — it will generate user trust and help facilitate its uptake,” she said.

“Trust is the key word. There is no other way. It is only by ensuring trustworthiness that Europe will position itself as a leader in cutting edge, secure and ethical AI. And that European citizens will enjoy AI’s benefits.”


Drones ground flights at UK’s second largest airport


Mystery drone operator/s have grounded flights at the U.K.’s second largest airport, disrupting the travel plans of hundreds of thousands of people hoping to get away over the festive period.

The BBC reports that Gatwick Airport’s runway has been shut since Wednesday night on safety grounds, after drones were spotted being flown repeatedly over the airfield.

It says airlines have been advised to cancel all flights up to at least 16:00 GMT, with the airport saying the runway would not open “until it was safe to do so.”

More than 20 police units are reported to be searching for the drone operator/s.

The U.K. amended existing legislation this year to make it illegal to fly a drone within 1km of an airport, after a planned drone bill got delayed.

The same safety-focused tweak to the law five months ago also restricted drone flight height to 400 ft. A registration scheme for drone owners is set to be introduced next year.
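
Those two numbers — a 1km exclusion zone around airports and a 400 ft ceiling — are enough to sketch a toy geofence check. The function names, thresholds and airport list below are illustrative, not any official implementation:

```python
import math

EXCLUSION_RADIUS_M = 1_000   # 1km exclusion zone around airports (UK rule)
MAX_ALTITUDE_FT = 400        # height ceiling from the same amendment

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flight_is_legal(drone_lat, drone_lon, altitude_ft, airports):
    """Check a proposed flight against the 1km / 400 ft rules (toy model)."""
    if altitude_ft > MAX_ALTITUDE_FT:
        return False
    return all(
        haversine_m(drone_lat, drone_lon, a_lat, a_lon) > EXCLUSION_RADIUS_M
        for a_lat, a_lon in airports
    )

# Gatwick Airport's approximate coordinates
airports = [(51.1537, -0.1821)]
print(flight_is_legal(51.1540, -0.1820, 300, airports))  # near the runway -> False
print(flight_is_legal(51.30, -0.10, 300, airports))      # well outside -> True
```

Consumer drone firmware often enforces geofences of roughly this shape, though real no-fly databases are far richer than a list of point coordinates.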

Under current U.K. law, a drone operator who is charged with recklessly or negligently acting in a manner likely to endanger an aircraft or a person in an aircraft can face a penalty of up to five years in prison or an unlimited fine, or both.

In the Gatwick case, though, it’s not clear whether simply flying a drone near a runway would constitute an attempt to endanger an aircraft under the law — even though the incident has clearly caused major disruption to travelers as the safety-conscious airport takes no chances.

Further adding to the misery of disrupted passengers today, the Civil Aviation Authority told the BBC it considered the event to be an “extraordinary circumstance” — meaning airlines aren’t obligated to pay financial compensation.

There’s been a marked rise in U.K. aircraft incidents involving drones over the past five years, with more than 100 recorded so far this year, according to data from the U.K. Airprox Board.

Aviation minister Baroness Sugg faced a barrage of questions about the Gatwick disruption in the House of Lords today, including accusations the government has dragged its feet on bringing in technical specifications that might have avoided the disruption.

“These drones are being operated illegally… It seems that the drones are being used intentionally to disrupt the airport, but, as I said, this is an ongoing investigation,” she told peers, adding: “We changed the law earlier this year, bringing in an exclusion zone around airports. We are working with manufactures and retailers to ensure that the new rules are communicated to those who purchase drones.

“From November next year, people will need to register their drone and take an online safety test. We have also recently consulted on extending police powers and will make an announcement on next steps shortly.”

The minister was also pressed on what the government had done to explore counterdrone technology, which could be used to disable drones, with one peer noting they’d raised the very issue two years ago.

“My Lords, technology is rapidly advancing in this area,” responded Sugg. “That is absolutely something that we are looking at. As I said, part of the consultation we did earlier this year was on counterdrone technology and we will be announcing our next steps on that very soon.”

Another peer wondered whether techniques he said had been developed by the U.K. military and spy agency GCHQ — to rapidly identify the frequency a drone is operating on, and either jam it or take control and land it — will be “given more broadly to various airports”?

“All relevant parts of the Government, including the Ministry of Defence, are working on this issue today to try to resolve it as quickly as possible,” the minister replied. “We are working on the new technology that is available to ensure that such an incident does not happen again. It is not acceptable that passengers have faced such disruption ahead of Christmas and we are doing all we can to resolve it as quickly as possible.”


The Zortrax Apoller safely smooths 3D prints


The Zortrax Apoller is a Smart Vapor Smoothing device that uses solvents to smooth the surface of 3D-printed objects. The resulting products look like they were injection molded, and all of the little layer lines associated with FDM printing disappear.

The system uses a microwave-like chamber that can hold multiple parts at once. The chamber atomizes the solvent, covering the parts, and lets the solvent do its work. Once it’s done, it sucks the excess vapor back into a collection chamber. The system won’t open until all of the solvent is gone, ensuring you don’t get a face full of acetone. This is an important consideration, since this is sold as a desktop device and having clouds of solvent in the air at the office Christmas party could be messy.

“Vapor-smoothed models get the look of injection-molded parts with a glossy or matte finish depending on the filament used. With a dual condensation process, a 300ml bottle of solvent can be used for smoothing multiple prints instead of just one. This efficiency means that the combined weekly output of four typical FDM 3D printers can be automatically smoothed within one day without loss of quality,” the company wrote.

Given the often flimsy structural quality of FDM prints, this smoothing is mostly cosmetic, though it allows you, in theory, to create molds from 3D-printed parts. In reality, these glossy, acetone-smoothed parts just look better and give you a better idea of what the finished product — injection-molded or milled — will look like when all is said and done.


Researchers discover a new way to identify 3D printed guns

Posted by | 3d printing, Buffalo, design, Emerging-Technologies, Fingerprint, Gadgets, industrial design, Makerbot, printer, printing, TC, technology | No Comments

Researchers at the University at Buffalo have found that 3D printers have fingerprints: slight, machine-specific imperfections that can be used to identify their prints. This means investigators can examine the layers of a 3D-printed object and pinpoint exactly which machine produced the parts.

“3D printing has many wonderful uses, but it’s also a counterfeiter’s dream. Even more concerning, it has the potential to make firearms more readily available to people who are not allowed to possess them,” said Wenyao Xu, lead author of the study.

The researchers found that the tiny wrinkles in each layer of plastic betray the machine that made them: a printer's "model type, filament, nozzle size and other factors cause slight imperfections in the patterns." They call their technology PrinTracker.

“Like a fingerprint to a person, these patterns are unique and repeatable. As a result, they can be traced back to the 3D printer,” wrote the researchers.
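The matching idea can be sketched in a few lines of code. To be clear, this is an illustration of fingerprint-style matching in general, not the PrinTracker algorithm itself; the profiles, values and function names below are hypothetical. A scanned layer-texture profile is compared against stored printer profiles, and the best correlation wins.

```python
# Illustrative sketch of fingerprint-style matching (NOT the PrinTracker
# algorithm): compare a scanned layer-texture profile against known printer
# profiles using Pearson correlation, and pick the closest match.
def normalized_correlation(a, b):
    """Pearson correlation between two equal-length texture profiles."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    std_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    std_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (std_a * std_b)

def identify_printer(scan, known_profiles):
    """Return the printer whose stored profile best matches the scan."""
    return max(known_profiles,
               key=lambda name: normalized_correlation(scan, known_profiles[name]))

# Toy "wrinkle" profiles for two printers, plus a noisy scan of printer A's output.
profiles = {
    "printer_A": [0.1, 0.4, 0.2, 0.5, 0.3, 0.6],
    "printer_B": [0.6, 0.2, 0.5, 0.1, 0.4, 0.1],
}
scan = [0.12, 0.41, 0.19, 0.52, 0.31, 0.58]
print(identify_printer(scan, profiles))  # printer_A
```

Because the real patterns are "unique and repeatable," per the researchers, the same kind of nearest-match lookup could in principle be run against a database of seized or registered printers.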

This process works primarily with FDM printers like the MakerBot, which use long spools of filament to deposit layers of plastic onto a build plate. Because the printers used to make 3D-printed guns are usually more complex and more expensive, there could be less variation in the individual layers and, more importantly, the layers might be harder to discern. However, even simpler plastic parts could exhibit identifying variations.

“3D printers are built to be the same. But there are slight variations in their hardware created during the manufacturing process that lead to unique, inevitable and unchangeable patterns in every object they print,” said Xu.


D-Wave offers the first public access to a quantum computer

Posted by | api, computing, D-Wave Systems, Emerging-Technologies, Gadgets, Python, quantum computing, Quantum Mechanics, Startups, TC, vancouver | No Comments

Outside the crop of construction cranes that now dot Vancouver's bright downtown greenways, in a suburban business park that reminds you more of dentists and tax preparers, is a small office building belonging to D-Wave. This office — squat, angular and sun-dappled one recent cool autumn morning — is unique in that it contains an infinite collection of parallel universes.

Founded in 1999 by Geordie Rose, D-Wave worked in relative obscurity on esoteric problems associated with quantum computing. When Rose was a PhD student at the University of British Columbia, he turned in an assignment that outlined a quantum computing company. His entrepreneurship teacher at the time, Haig Farris, found the young physicist's ideas compelling enough to give him $1,000 to buy a computer and a printer to type up a business plan.

The company consulted with academics until 2005, when Rose and his team decided to focus on building usable quantum computers. The result, the Orion, launched in 2007, and was used to classify drug molecules and play Sudoku. The business now sells computers for up to $10 million to clients like Google, Microsoft and Northrop Grumman.

“We’ve been focused on making quantum computing practical since day one. In 2010 we started offering remote cloud access to customers and today, we have 100 early applications running on our computers (70 percent of which were built in the cloud),” said CEO Vern Brownell. “Through this work, our customers have told us it takes more than just access to real quantum hardware to benefit from quantum computing. In order to build a true quantum ecosystem, millions of developers need the access and tools to get started with quantum.”

Now their computers are simulating weather patterns and tsunamis, optimizing hotel ad displays, solving complex network problems and, thanks to a new, open-source platform, could help you ride the quantum wave of computer programming.

Inside the box

When I went to visit D-Wave, the company gave me unprecedented access to the inside of one of its quantum machines. The computers, which are about the size of a garden shed, have a control unit on the front that manages the temperature as well as the queuing system that translates and communicates the problems sent in by users.

Inside the machine is a tube that, when fully operational, contains a small chip super-cooled to 0.015 Kelvin, or -459.643 degrees Fahrenheit or -273.135 degrees Celsius. The entire system looks like something out of the Death Star — a cylinder of pure data that the heroes must access by walking through a little door in the side of a jet-black cube.
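Those temperature figures are straightforward unit conversions from Kelvin, which a couple of lines of Python can confirm:

```python
# Convert the chip's operating temperature from Kelvin to Celsius and Fahrenheit.
kelvin = 0.015
celsius = kelvin - 273.15             # C = K - 273.15
fahrenheit = kelvin * 9 / 5 - 459.67  # F = K * 9/5 - 459.67

print(round(celsius, 3))     # -273.135
print(round(fahrenheit, 3))  # -459.643
```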

It’s quite thrilling to see this odd little chip inside its super-cooled home. As the computer revolution maintained its predilection toward room-temperature chips, these odd and unique machines are a connection to an alternate timeline where physics is wrestled into submission in order to do some truly remarkable things.

And now anyone — from kids to PhDs to everyone in-between — can try it.

Into the ocean

Learning to program a quantum computer takes time. Because the processor doesn't work like a classical universal computer, you have to train the chip to perform simple functions that your own cellphone can do in seconds. However, in some cases, researchers have found the chips can outperform classical computers by 3,600 times. This trade-off — the movement from the known to the unknown — is why D-Wave exposed its product to the world.

“We built Leap to give millions of developers access to quantum computing. We built the first quantum application environment so any software developer interested in quantum computing can start writing and running applications — you don’t need deep quantum knowledge to get started. If you know Python, you can build applications on Leap,” said Brownell.

To get started on the road to quantum computing, D-Wave built the Leap platform, an open-source toolkit for developers. When you sign up, you receive one minute's worth of quantum processing unit (QPU) time which, given that most problems run in milliseconds, is more than enough to begin experimenting. A queue manager lines up your code and runs it in the order received, and the answers are spit out almost instantly.

You can code on the QPU with Python or via Jupyter notebooks, and it allows you to connect to the QPU with an API token. After writing your code, you can send commands directly to the QPU and then output the results. The programs are currently pretty esoteric and require a basic knowledge of quantum programming but, it should be remembered, classical computer programming was once daunting to the average user.
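To give a flavor of the kind of problem you hand to an annealer like D-Wave's, the sketch below builds a tiny QUBO (quadratic unconstrained binary optimization) problem and solves it by classical brute force. This is an illustration of the problem format only, not D-Wave's Ocean API; on Leap you would pass a dictionary like `Q` to a QPU sampler instead of enumerating states yourself.

```python
from itertools import product

# A quantum annealer samples low-energy states of a QUBO:
#   E(x) = sum over (i, j) of Q[i, j] * x_i * x_j, with each x_i in {0, 1}
# (diagonal entries Q[i, i] act as linear terms, since x_i * x_i = x_i).

def qubo_energy(Q, x):
    """Energy of assignment x (tuple of 0/1 values) under QUBO dict Q."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

def brute_force_minimum(Q, n):
    """Enumerate all 2^n assignments and return a lowest-energy one."""
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))

# Encode "exactly one of x0, x1 is 1" as a penalty, from expanding
# (x0 + x1 - 1)^2 and dropping the constant: -x0 - x1 + 2*x0*x1.
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}
best = brute_force_minimum(Q, 2)
print(best, qubo_energy(Q, best))
```

The brute-force loop is exactly what becomes intractable as variables are added, which is the gap annealing hardware aims to close.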

I downloaded and ran most of the demonstrations without a hitch. These demonstrations — factoring programs, network generators and the like — essentially turned the concepts of classical programming into quantum questions. Instead of iterating through a list of factors, for example, the quantum computer creates a “parallel universe” of answers and then collapses each one until it finds the right answer. If this sounds odd it’s because it is. The researchers at D-Wave argue all the time about how to imagine a quantum computer’s various processes. One camp sees the physical implementation of a quantum computer to be simply a faster methodology for rendering answers. The other camp, itself aligned with Professor David Deutsch’s ideas presented in The Beginning of Infinity, sees the sheer number of possible permutations a quantum computer can traverse as evidence of parallel universes.

What does the code look like? It's hard to read without understanding the basics, a fact the D-Wave engineers accounted for by offering online documentation. For example, below is most of the factoring code for one of their demo programs, a routine that can be reduced to about five lines on a classical computer. However, when this function uses a quantum processor, the entire process takes milliseconds versus minutes or hours.

Classical

# Python program to find the factors of a number

# define a function
def print_factors(x):
    """This function takes a number and prints the factors."""
    print("The factors of", x, "are:")
    for i in range(1, x + 1):
        if x % i == 0:
            print(i)

# change this value for a different result
num = 320

# uncomment the following line to take input from the user
# num = int(input("Enter a number: "))

print_factors(num)

Quantum

@qpu_ha
def factor(P, use_saved_embedding=True):

    ####################################################################
    # get circuit
    ####################################################################

    construction_start_time = time.time()

    validate_input(P, range(2 ** 6))

    # get constraint satisfaction problem
    csp = dbc.factories.multiplication_circuit(3)

    # get binary quadratic model
    bqm = dbc.stitch(csp, min_classical_gap=.1)

    # we know that multiplication_circuit() has created these variables
    p_vars = ['p0', 'p1', 'p2', 'p3', 'p4', 'p5']

    # convert P from decimal to binary
    fixed_variables = dict(zip(reversed(p_vars), "{:06b}".format(P)))
    fixed_variables = {var: int(x) for (var, x) in fixed_variables.items()}

    # fix product qubits
    for var, value in fixed_variables.items():
        bqm.fix_variable(var, value)

    log.debug('bqm construction time: %s', time.time() - construction_start_time)

    ####################################################################
    # run problem
    ####################################################################

    sample_time = time.time()

    # get QPU sampler
    sampler = DWaveSampler(solver_features=dict(online=True, name='DW_2000Q.*'))
    _, target_edgelist, target_adjacency = sampler.structure

    if use_saved_embedding:
        # load a pre-calculated embedding
        from factoring.embedding import embeddings
        embedding = embeddings[sampler.solver.id]
    else:
        # get the embedding
        embedding = minorminer.find_embedding(bqm.quadratic, target_edgelist)
        if bqm and not embedding:
            raise ValueError("no embedding found")

    # apply the embedding to the given problem to map it to the sampler
    bqm_embedded = dimod.embed_bqm(bqm, embedding, target_adjacency, 3.0)

    # draw samples from the QPU
    kwargs = {}
    if 'num_reads' in sampler.parameters:
        kwargs['num_reads'] = 50
    if 'answer_mode' in sampler.parameters:
        kwargs['answer_mode'] = 'histogram'
    response = sampler.sample(bqm_embedded, **kwargs)

    # convert back to the original problem space
    response = dimod.unembed_response(response, embedding, source_bqm=bqm)

    sampler.client.close()

    log.debug('embedding and sampling time: %s', time.time() - sample_time)

 

“The industry is at an inflection point and we’ve moved beyond the theoretical, and into the practical era of quantum applications. It’s time to open this up to more smart, curious developers so they can build the first quantum killer app. Leap’s combination of immediate access to live quantum computers, along with tools, resources, and a community, will fuel that,” said Brownell. “For Leap’s future, we see millions of developers using this to share ideas, learn from each other and contribute open-source code. It’s that kind of collaborative developer community that we think will lead us to the first quantum killer app.”

The folks at D-Wave created a number of tutorials as well as a forum where users can learn and ask questions. The entire project is truly the first of its kind and promises unprecedented access to what amounts to the foreseeable future of computing. I've seen lots of technology over the years, and nothing has quite replicated the strange frisson of plugging into a quantum computer. Like the teletype and green-screen terminals used by early hackers like Bill Gates and Steve Wozniak, D-Wave has opened up a strange new world. How we explore it is up to us.
