hardware

Intel and Cray are building a $500 million ‘exascale’ supercomputer for Argonne National Lab

Posted by | cray, Gadgets, Government, hardware, Intel, science, supercomputers | No Comments

In a way, I have the equivalent of a supercomputer in my pocket. But in another, more important way, that pocket computer is a joke compared with real supercomputers — and Intel and Cray are putting together one of the biggest ever with a half-billion-dollar contract from the Department of Energy. It’s going to do exaflops!

The “Aurora” program aims to put together an “exascale” computing system for Argonne National Laboratory by 2021. The “exa” prefix indicates bigness, in this case 1 quintillion floating point operations per second, or exaFLOPS. FLOPS are kind of the horsepower rating of supercomputers.

For comparison, your average modern CPU does maybe a hundred or more gigaflops. A thousand gigaflops makes a teraflop, a thousand teraflops makes a petaflop, and a thousand petaflops makes an exaflop. So despite major advances in computing efficiency going into making super powerful smartphones and desktops, we’re talking several orders of magnitude difference. (Let’s not get into GPUs, it’s complicated.)
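The prefix ladder above is just powers of 1,000, so the gap is easy to put in numbers. A quick sketch (the CPU and Summit figures are the rough ones cited in this article, not benchmarks):

```python
# Powers-of-1000 ladder for FLOPS (floating point operations per second).
GIGA = 10**9
TERA = 10**12
PETA = 10**15
EXA = 10**18

# Illustrative figures from the article: a modern CPU at ~100 gigaflops,
# IBM's Summit at ~200 petaflops, and the 1-exaflop target for Aurora.
cpu_flops = 100 * GIGA
summit_flops = 200 * PETA
exascale_flops = 1 * EXA

print(f"Exascale vs. a desktop CPU: {exascale_flops / cpu_flops:.0e}x")   # ten million times
print(f"Exascale vs. Summit: {exascale_flops / summit_flops:.0f}x")       # 5x
```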

And even when compared with the biggest supercomputers and clusters out there, you’re still looking at a max of 200 petaflops (that would be IBM’s Summit, over at Oak Ridge National Lab) or thereabouts.

Just what do you need that kind of computing power for? Petaflops wouldn’t do it? Well, no, actually. One very recent example of computing limitations in real-world research was this study of how climate change could affect cloud formation in certain regions, reinforcing the trend and leading to a vicious cycle.

This kind of thing could only be estimated with much coarser models before; computing resources were too tight to allow for the extremely large number of variables involved here (or here — more clouds). Imagine simulating a ball bouncing on the ground — easy — now imagine simulating every molecule in that ball, their relationships to each other, gravity, air pressure, other forces — hard. Now imagine simulating two stars colliding.

The more computing resources we have, the more can be dedicated to, as the Intel press release offers as examples, “developing extreme-scale cosmological simulations, discovering new approaches for drug response prediction and discovering materials for the creation of more efficient organic solar cells.”

Intel says that Aurora will be the first exaflop system in the U.S. — an important caveat, since China is aiming to accomplish the task a year earlier. There’s no reason to think they won’t achieve it, either, since Chinese supercomputers have reliably been among the fastest in the world.

If you’re curious what ANL may be putting its soon-to-be-built computers to work for, feel free to browse its research index. The short answer is “just about everything.”

Powered by WPeMatico

Apple launches new iPad Air and iPad mini

Posted by | Apple, Gadgets, hardware, iPad, ipad air, iPad Pro | No Comments

Apple has refreshed its iPad lineup. The company is (finally) updating the iPad mini and adding a new iPad Air. This model sits between the entry-level 9.7-inch iPad and the 11-inch iPad Pro in the lineup.

All new models now support the Apple Pencil, but you might want to double-check your iPad model before buying one. The new iPad models released today work with the first-gen Apple Pencil, not the new Apple Pencil that supports magnetic charging and pairing.

So let’s look at those new iPads. First, the iPad mini hasn’t been refreshed in three and a half years. Many people believed that Apple would simply drop the model as smartphones got bigger. But the iPad mini is making a surprise comeback.

pic.twitter.com/Iqz1MHTg1p

— Tim Cook (@tim_cook) March 18, 2019

It looks identical to the previous 2015 model, but everything inside the device has been updated. It now features an A12 chip (the system on a chip designed for the iPhone XS) and a 7.9-inch display that is 25 percent brighter, covers a wider range of colors and supports True Tone. And it also works with the Apple Pencil.

Unlike with the iPad Pro, the iPad mini still features a Touch ID fingerprint sensor, a Lightning port and a headphone jack. You can buy it today for $399 for 64GB. You can choose to pay more for 256GB of storage and cellular connectivity. It comes in silver, space gray and gold.

Second, the iPad Air. While the name sounds familiar, this is a new device in the iPad lineup. When Apple introduced the new iPad Pro models back in October, it raised the prices on this segment of the market.

This new iPad Air is a bit cheaper than the 11-inch iPad Pro and looks more or less like the previous generation 10.5-inch iPad Pro — I know, it’s confusing. The iPad Air now features an A12 chip, which should represent a significant upgrade over the previous-generation iPad Pro that featured an A10X. The iPad Air works with the Smart Keyboard.

You can buy the device today for $499 with 64GB of storage. You can choose to pay more for 256GB of storage and cellular connectivity. It comes in silver, space gray and gold.

The $329 iPad with a 9.7-inch display hasn’t been updated today. It still features an A10 chip, 64GB of storage and a display without True Tone technology or a wider range of colors.

 


Turtle Beach is buying fellow gaming accessory maker Roccat

Posted by | Gaming, hardware, M&A, roccat, Turtle Beach | No Comments

There was a nice surprise morsel for those following Turtle Beach’s financials this week. In addition to a “record fourth quarter,” the headset maker announced that it has agreed to purchase fellow gaming peripheral company Roccat for $14.8 million in cash.

Turtle Beach is best known for creating gaming headsets for a wide range of different consoles, PCs and mobile devices. Picking up Germany-based Roccat will help the San Diego company further expand into additional peripherals like mice and keyboards. Turtle Beach is also hoping it will help expand its primarily U.S. and Europe-based sales into Asia, where Roccat has already made a dent.

In a press release tied to the news, Turtle Beach CEO Juergen Stark calls the deal “a key step in achieving our goal of building a $100 million PC gaming accessories business in the coming years.”

The complementary nature of the two companies’ product portfolios should certainly go a ways toward helping expand Turtle Beach’s brand. No word, however, on whether the company will continue to maintain the Roccat line in those markets where it’s already found some traction. Certainly that would make a lot of sense in the short term.

Turtle Beach expects the deal to close in Q2.


Apple ad focuses on iPhone’s most marketable feature — privacy

Posted by | Apple, computing, digital media, digital rights, Facebook, hardware, human rights, identity management, iPhone, law, Mobile, privacy, TC, terms of service, Tim Cook, United States | No Comments

Apple is airing a new ad spot in primetime today. Focused on privacy, the spot is visually cued, with no dialog and a simple tagline: Privacy. That’s iPhone.

In a series of humorous vignettes, the message is driven home that sometimes you just want a little privacy. The spot has only one line of text otherwise, and it’s in keeping with Apple’s messaging on privacy over the long and short term. “If privacy matters in your life, it should matter to the phone your life is on.”

The spot will air tonight in primetime in the U.S. and extend through March Madness. It will then air in select other countries.

You’d have to be hiding under a rock not to have noticed Apple positioning privacy as a differentiating factor between itself and other companies. Beginning a few years ago, CEO Tim Cook began taking more and more public stances on what the company felt to be your “rights” to privacy on its platform and how that differed from other companies. The undercurrent is that Apple is able to take this stance because its first-party business relies on a relatively direct relationship with customers who purchase its hardware and, increasingly, its services.

This stands in contrast to the model of other tech giants like Google or Facebook, which insert an interstitial layer of monetization on top of that relationship: applying personal information about you (in somewhat anonymized fashion) to sell their platforms to advertisers, who in turn can sell to you more effectively.

Turning the ethical high ground into a marketing strategy is not without its pitfalls, though, as Apple has discovered recently with a (now patched) high-profile FaceTime bug that allowed people to turn your phone into a listening device, Facebook’s manipulation of App Store permissions and the revelation that there was some long overdue house cleaning needed in its Enterprise Certificate program.

I did find it interesting that the iconography of the “Private Side” spot very, very closely associates the concepts of privacy and security. They are separate, but interrelated, obviously. This spot says these are one and the same. It’s hard to enforce privacy without security, of course, but in the mind of the public I think there is very little difference between the two.

The App Store itself, of course, still hosts apps from Google and Facebook among thousands of others that use personal data of yours in one form or another. Apple’s argument is that it protects the data you give to your phone aggressively by processing on the device, collecting minimal data, disconnecting that data from the user as much as possible and giving users as transparent a control interface as possible. All true. All far, far better efforts than the competition.

Still, there is room to run, I feel, when it comes to Apple adjudicating what should be considered a societal norm when it comes to the use of personal data on its platform. If it’s going to be the absolute arbiter of what flies on the world’s most profitable application marketplace, it might as well use that power to get a little more feisty with the bigcos (and littlecos) that make their living on our data.

I mention the issues Apple has had above not as a dig, though some might be inclined to view Apple integrating privacy with marketing as boldness bordering on hubris. I, personally, think there’s still a major difference between a company that has situational loss of privacy while having a systemic dedication to privacy and, well, most of the rest of the ecosystem which exists because they operate an “invasion of privacy as a service” business.

Basically, I think stating privacy is your mission is still supportable, even if you have bugs. But attempting to ignore that you host the data platforms that thrive on it is a tasty bit of prestidigitation.

But that might be a little too verbose as a tagline.


Tiny claws let drones perch like birds and bats

Posted by | artificial intelligence, biomimesis, biomimetic, drones, Gadgets, hardware, robotics, science | No Comments

Drones are useful in countless ways, but that usefulness is often limited by the time they can stay in the air. Shouldn’t drones be able to take a load off too? With these special claws attached, they can perch or hang with ease, conserving battery power and vastly extending their flight time.

The claws, created by a highly multinational team of researchers I’ll list at the end, are inspired by birds and bats. The team noted that many flying animals have specially adapted feet or claws suited to attaching the creature to its favored surface. Sometimes they sit, sometimes they hang, sometimes they just kind of lean on it and don’t have to flap as hard.

As the researchers write:

In all of these cases, some suitably shaped part of the animal’s foot interacts with a structure in the environment and facilitates that less lift needs to be generated or that power flight can be completely suspended. Our goal is to use the same concept, which is commonly referred to as “perching,” for UAVs [unmanned aerial vehicles].

“Perching,” you say? Go on…

We designed a modularized and actuated landing gear framework for rotary-wing UAVs consisting of an actuated gripper module and a set of contact modules that are mounted on the gripper’s fingers.

This modularization substantially increased the range of possible structures that can be exploited for perching and resting as compared with avian-inspired grippers.

Instead of trying to build one complex mechanism, like a pair of articulating feet, the team gave the drones a set of specially shaped 3D-printed static modules and one big gripper.

The drone surveys its surroundings using lidar or some other depth-aware sensor. This lets it characterize surfaces nearby and match those to a library of examples that it knows it can rest on.

Squared-off edges like those on the top right can be rested on as in A, while a pole can be balanced on as in B.

If the drone sees and needs to rest on a pole, it can grab it from above. If it’s a horizontal bar, it can grip it and hang below, flipping up again when necessary. If it’s a ledge, it can use a little cutout to steady itself against the corner, letting it shut off some or all of its motors. These modules can easily be swapped out or modified depending on the mission.

I have to say the whole thing actually seems to work remarkably well for a prototype. The hard part appears to be the recognition of useful surfaces and the precise positioning required to land on them properly. But it’s useful enough — in professional and military applications especially, one suspects — that it seems likely to be a common feature in a few years.

The paper describing this system was published in the journal Science Robotics. I don’t want to leave anyone out, so it’s by: Kaiyu Hang, Ximin Lyu, Haoran Song, Johannes A. Stork , Aaron M. Dollar, Danica Kragic and Fu Zhang, from Yale, the Hong Kong University of Science and Technology, the University of Hong Kong, and the KTH Royal Institute of Technology.


Opportunity’s last Mars panorama is a showstopper

Posted by | Gadgets, Government, hardware, jpl, mars, mars rover, mars rovers, NASA, Opportunity, science, Space, TC | No Comments

The Opportunity Mars Rover may be officially offline for good, but its legacy of science and imagery is ongoing — and NASA just shared the last (nearly) complete panorama the robot sent back before it was blanketed in dust.

After more than 5,000 days (or rather sols) on the Martian surface, Opportunity found itself in Endeavour Crater, specifically in Perseverance Valley on the western rim. For the last month of its active life, it systematically imaged its surroundings to create another of its many impressive panoramas.

Using the Pancam, which shoots sequentially through blue, green and deep red (near-infrared) filters, it snapped 354 images of the area, capturing a broad variety of terrain as well as bits of itself and its tracks into the valley. You can click the image below for the full annotated version.

It’s as perfect and diverse an example of the Martian landscape as one could hope for, and the false-color image (the flatter true-color version is here) has a special otherworldly beauty to it, which is only added to by the poignancy of this being the rover’s last shot. In fact, it didn’t even finish — a monochrome region in the lower left shows where it needed to add color next.

This isn’t technically the last image the rover sent, though. As the fatal dust storm closed in, Opportunity sent one last thumbnail for an image that never went out: its last glimpse of the sun.

After this the dust cloud so completely covered the sun that Opportunity was enveloped in pitch darkness, as its true last transmission showed:

All the sparkles and dots are just noise from the image sensor. It would have been completely dark — and for weeks on end, considering the planetary scale of the storm.

Opportunity had a hell of a good run, lasting and traveling many times what it was expected to and exceeding even the wildest hopes of the team. That right up until its final day it was capturing beautiful and valuable data is testament to the robustness and care with which it was engineered.


SpaceX makes history by completing first private crew capsule mission

Posted by | commercial crew, Gadgets, Government, hardware, international space station, ISS, NASA, Space, SpaceX | No Comments

SpaceX’s Crew Dragon capsule has safely splashed down in the Atlantic, making it the first privately built crew-capable spacecraft ever to complete a mission to the International Space Station. It’s one of several firsts SpaceX plans this year, but Boeing is hot on its heels with a crew demonstrator of its own — and of course the real test is doing the same thing with astronauts aboard.

This mission, Demo-1, had SpaceX showing that its Crew Dragon capsule, an evolution of the cargo-bearing Dragon that has made numerous ISS deliveries, was complete and ready to take on its eponymous crew.

It took off early in the morning of March 2 (still March 1 on the West coast), circled the Earth 18 times, and eventually came to a stop (relatively speaking, of course) adjacent to the ISS, after which it approached and docked with the new International Docking Adapter. The 400 pounds of supplies it carried were unloaded, but the “anthropomorphic test device” known as Ripley — basically a space crash test dummy — stayed in her seat on board.

(It’s also worth noting that the Falcon 9 first stage that took the capsule to the edge of the atmosphere landed autonomously on a drone ship.)

Five days later — very early this morning — the craft disengaged from the ISS and began the process of deorbiting. It landed on schedule at about 8:45 in the morning Eastern time.

It’s a huge validation of NASA’s Commercial Crew Program, and of course a triumph for SpaceX, which not only made and launched a functioning crew spacecraft, but did so before its rival Boeing. That said, it isn’t winner take all — the two spacecraft could very well exist in healthy competition as crewed missions to space become more and more common.

Expect to see a report on the mission soon after SpaceX and NASA have had time to debrief and examine the craft (and Ripley).


Leica’s Q2 is a beautiful camera that I want and will never have

Posted by | cameras, Gadgets, hardware, leica, leica q2, Photography | No Comments

Leica is a brand I respect and appreciate but don’t support. Or rather, can’t, because I’m not fabulously rich. But if I did have $5,000 to spend on a fixed-lens camera, I’d probably get the new Q2, a significant improvement over 2015’s Q — which tempted me back then.

The Q2 keeps much of what made the Q great: a full-frame sensor, a fabulous 28mm F/1.7 Summilux lens, and straightforward operation focused on getting the shot. But it also makes some major changes that make the Q2 a far more competitive camera.

The sensor has jumped from 24 to 47 megapixels, and while we’re well out of the megapixel race, that creates the opportunity for a very useful cropped shooting mode that lets you shoot at 35, 50, and 75mm equivalents while still capturing huge pixel counts. It keeps the full frame exposure as well so you can tweak the crop later. The new sensor also has a super low native ISO of 50, which should help with dynamic range and in certain exposure conditions.
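As a back-of-the-envelope check (assuming the crop modes simply trim the 47-megapixel full frame, which is how fixed-lens crop modes generally work), the pixel count at each equivalent focal length falls with the square of the crop factor:

```python
# Cropping a fixed sensor: resolution falls with the square of the crop factor.
NATIVE_MP = 47.0   # approximate Q2 sensor resolution, in megapixels
NATIVE_MM = 28     # the fixed Summilux focal length

for equiv_mm in (35, 50, 75):
    crop_factor = equiv_mm / NATIVE_MM
    mp = NATIVE_MP / crop_factor**2
    print(f"{equiv_mm}mm crop: ~{mp:.1f} MP")
```

So even the tightest 75mm crop still leaves a very usable image, which is the point of starting from such a dense sensor.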

Autofocus has been redone as well (as you might expect with a new sensor), and it should be quicker and more accurate now. There’s also an optical stabilization mode that kicks in when you are shooting at under 1/60s. Both are features that need a little testing to verify they’re as good as they sound, but I don’t expect they’re fraudulent or anything.

The body, already a handsome minimal design in keeping with Leica’s impeccable (if expensive) taste, is now weather sealed, making this a viable walk-around camera in all conditions. Imagine paying five grand for a camera and being afraid to take it out in the rain! Well, many people did that and perhaps will feel foolish now that the Q2 has arrived.

Inside is an electronic viewfinder. The 2015 Q had a sequential-field display — meaning it flashed rapidly through the red, green, and blue components of the image — which made it prone to color artifacts in high-motion scenes or when panning. The Q2, however, has a shiny new OLED display with the same resolution but better performance. OLEDs are great for EVFs for a lot of reasons, but I like that you get really nice blacks, like in an optical viewfinder.

The button layout has been simplified as well (or rather synchronized with the CL, another Leica model), with a new customizable button on the top plate, reflecting the trend of personalization we’ve seen in high-end cameras. A considerably larger battery and a redesigned battery-and-card door round out the new features.

As DPReview points out in its hands-on preview of the camera, the Q2 is significantly heavier than the high-end fixed-lens competition (namely the Sony RX1R II and Fuji X100F, both excellent cameras), and also significantly more expensive. But unlike many Leica offerings, it actually outperforms them in important ways: the lens, the weather sealing, the burst speed — it may be expensive, but you actually get something for your money. That can’t always be said of this brand.

The Leica Q2 typifies the type of camera I’d like to own: no real accessories, nothing to swap in or out, great image quality and straightforward operation. I’m far more likely to get an X100F (and even then it’d be a huge splurge) but all that time I’ll be looking at the Q2 with envious eyes. Maybe I’ll get to touch one some day.


Galaxy S10 takes the ‘best smartphone display’ crown

Posted by | Gadgets, galaxy s10, hardware, Mobile, Samsung, smartphones | No Comments

As you may have gathered from our review of Samsung’s Galaxy S10, it’s a very solid phone with lots of advanced features. But one thing that’s especially difficult to test is the absolute quality of the display — which is why we leave that part to the experts. And this expert says the S10’s screen is the best ever on a smartphone.

Ray Soneira has tested every major phone, tablet and laptop series for many a year, using all the cool color calibration, reflectance and brightness measurement and other gear that goes with the job. So when he says the S10’s display is “absolutely stunning and Beautiful,” with a capital B at that, it’s worth taking note.

OLED technology has advanced a great deal since the first one I encountered, on the Zune HD — which still works and looks great, by the way, thank you. But originally it had quite a few trade-offs compared with LCD panels, such as weird color casts or pixel layout issues. Samsung has progressed well beyond that and OLED has come into its own with a vengeance. As Ray puts it:

The Absolute Color Accuracy on the Galaxy S10 is the Most Color Accurate Display we have ever measured. It is Visually Indistinguishable From Perfect, and almost certainly considerably better than your existing Smartphone, living room HDTV, Tablet, Laptop, and computer monitor, as demonstrated in our extensive Absolute Color Accuracy Lab Measurements.

The very challenging set of DisplayMate Test and Calibration Photos that we use to evaluate picture quality looked absolutely stunning and Beautiful, even to my experienced hyper-critical eyes.

Make sure you switch the phone’s display to “natural mode,” which makes subtle changes to the color space depending on the content and ambient light.

And although he has enthused many times before about the quality of various displays and the advances they made over their predecessors, the above is certainly very different language from, for example, how he described the reigning champ until today — the iPhone X:

Apple has produced an impressive Smartphone display with excellent performance and accuracy, which we cover in extensive detail below. What makes the iPhone X the Best Smartphone Display is the impressive Precision Display Calibration Apple developed, which transforms the OLED hardware into a superbly accurate, high performance, and gorgeous display, with close to Text Book Perfect Calibration and Performance!!

High praise, but not quite falling all over himself, as he did with the S10. As you can see, I rate smartphone displays chiefly by the emotional response they evoke from Ray Soneira.

At this point, naturally, the gains from improving displays are fairly few, because, to be honest, not many people care or can even tell today’s flagship displays apart. But little touches like front and back sensors for ambient light detection, automatic calibration and brightness that take user preferences into account — these also improve the experience, and phone makers have been adding them at a good clip, as well.

No matter which flagship phone you buy today, it’s going to have a fantastic camera and screen — but if you like to see it all in black and white, read through the review and you’ll find your hopes justified.


Koala-sensing drone helps keep tabs on drop bear numbers

Posted by | artificial intelligence, Australia, Computer Vision, conservation, drones, Gadgets, hardware, machine learning, science, TC, UAVs | No Comments

It’s obviously important to Australians to make sure their koala population is closely tracked — but how can you do so when the suckers live in forests and climb trees all the time? With drones and AI, of course.

A new project from Queensland University of Technology combines some well-known techniques in a new way to help keep an eye on wild populations of the famous and soft marsupials. They used a drone equipped with a heat-sensing camera, then ran the footage through a deep learning model trained to look for koala-like heat signatures.

It’s similar in some ways to an earlier project from QUT in which dugongs — endangered sea cows — were counted along the shore via aerial imagery and machine learning. But this is considerably harder.

A koala

“A seal on a beach is a very different thing to a koala in a tree,” said study co-author Grant Hamilton in a news release, perhaps choosing not to use dugongs as an example because comparatively few know what one is.

“The complexity is part of the science here, which is really exciting,” he continued. “This is not just somebody counting animals with a drone, we’ve managed to do it in a very complex environment.”

The team sent their drone out in the early morning, when they expected to see the greatest contrast between the temperature of the air (cool) and tree-bound koalas (warm and furry). It traveled as if it were a lawnmower trimming the tops of the trees, collecting data from a large area.
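That “lawnmower” coverage pattern is a standard back-and-forth (boustrophedon) sweep over a survey rectangle. A minimal sketch with made-up dimensions, just to illustrate the flight plan:

```python
# Minimal boustrophedon ("lawnmower") waypoint generator for an aerial survey.
def lawnmower_path(width, height, spacing):
    """Return (x, y) corner waypoints sweeping back and forth across the area."""
    waypoints = []
    y = 0
    left_to_right = True
    while y <= height:
        xs = (0, width) if left_to_right else (width, 0)
        for x in xs:
            waypoints.append((x, y))
        y += spacing          # step over to the next sweep line
        left_to_right = not left_to_right
    return waypoints

# A hypothetical 100m x 40m plot swept every 20m:
print(lawnmower_path(100, 40, 20))
# [(0, 0), (100, 0), (100, 20), (0, 20), (0, 40), (100, 40)]
```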

Infrared image, left, and output of the neural network highlighting areas of interest

This footage was then put through a deep learning system trained to recognize the size and intensity of the heat put out by a koala, while ignoring other objects and animals like cars and kangaroos.

For these initial tests, the accuracy of the system was checked by comparing the inferred koala locations with ground truth measurements provided by GPS units on some animals and radio tags on others. Turns out the system found about 86 percent of the koalas in a given area, considerably better than an “expert koala spotter,” who rates about a 70. Not only that, but it’s a whole lot quicker.
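Those detection rates are effectively recall: the fraction of ground-truthed animals each method actually found. With the study’s headline figures (the population count below is illustrative, not from the paper):

```python
# Recall comparison using the headline figures from the study:
# the drone + AI pipeline found ~86% of ground-truthed koalas, human experts ~70%.
def recall(found: int, present: int) -> float:
    """Fraction of animals actually present that were detected."""
    return found / present

koalas_present = 50  # hypothetical ground-truth count for illustration
ai_found = round(0.86 * koalas_present)
expert_found = round(0.70 * koalas_present)

print(f"AI recall: {recall(ai_found, koalas_present):.0%}")          # 86%
print(f"Expert recall: {recall(expert_found, koalas_present):.0%}")  # 70%
```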

“We cover in a couple of hours what it would take a human all day to do,” Hamilton said. But it won’t replace human spotters or ground teams. “There are places that people can’t go and there are places that drones can’t go. There are advantages and downsides to each one of these techniques, and we need to figure out the best way to put them all together. Koalas are facing extinction in large areas, and so are many other species, and there is no silver bullet.”

Having tested the system in one area of Queensland, the team is now going to head out and try it in other areas of the coast. Other classifiers are planned to be added as well, so other endangered or invasive species can be identified with similar ease.

Their paper was published today in the journal Nature Scientific Reports.
