Intel

In-game app-development platform Overwolf acquires CurseForge assets from Twitch to get into mods


Overwolf, the in-game app-development toolkit and marketplace, has acquired Twitch’s CurseForge assets to provide a marketplace for modifications to complement its app development business.

Since its launch in 2009, developers have used Overwolf to build in-game applications for things like highlight clips, game-performance monitoring and metrics, and strategic analysis. Some of these developers have managed to earn anywhere between $100,000 and $1 million per year in revenue from app sales.

“CurseForge is the embodiment of how fostering a community of creators around games generates value for both players and game developers,” said Uri Marchand, Overwolf’s chief executive officer, in a statement. “As we move to onboard mods onto our platform, we’re positioning Overwolf as the industry standard for building in-game creations.”

It wouldn’t be a stretch to think of the company as the Roblox of applications for gamers, and now it’s moving deeper into the gaming world with the acquisition of CurseForge. As it pitches current CurseForge users in hopes that mod developers will stick with the marketplace, the company is offering to increase those developers’ revenue by 50%.

Overwolf said around 30,000 developers have already built 90,000 mods and apps on its platform.

As a result of the acquisition, the CurseForge mod manager will move from being part of the Twitch client to being a standalone desktop app included in Overwolf’s suite of app offerings. The acquisition won’t have any effect on existing tools and services.

“We’ve been deeply impressed by the level of passion and collaboration in the CurseForge modding community,” said Tim Aldridge, director of Engineering, Gaming Communities at Twitch. “CurseForge is an incredible asset for both creators and gamers. We are confident that the CurseForge community will thrive under Overwolf’s leadership, thanks to their commitment to empowering developers.”

The acquisition comes two years after Overwolf raised $16 million in a round of financing from Intel Capital, which had also partnered with the company on a $7 million fund to invest in app and mod developers for popular games.

“Overwolf’s position as a platform that serves millions of gamers, coupled with its partnership with top developers, means that Intel’s investment will convert into more value for PC gamers worldwide,” said John Bonini, VP and GM of VR, Esports and Gaming at Intel, in a statement at the time. “Intel has always prioritized gamers with high performance, industry-leading hardware. This round of investment in Overwolf advances Intel’s vision to deliver a holistic PC experience that will enhance the ways people interact with their favorite games on the software side as well.”

Other investors in the company include Liberty Technology Venture Capital, the investment arm of the media and telecommunications company, Liberty Media.

Powered by WPeMatico

Apple could reportedly announce Mac shift to its own ARM-based chips this month


For years now, analysts and unconfirmed reports have suggested Apple was working on transitioning its Mac line of computers away from Intel-based chips to its own ARM-based processors. Now, Bloomberg reports that the company could make those plans official as early as later this month, with an announcement potentially timed for its remote Worldwide Developers Conference (WWDC) happening the week of June 22.

Apple has historically made a number of announcements at WWDC, including providing forward-looking information about its software roadmap, like upcoming versions of macOS and iOS, in order to help developers prepare their software for the updates’ general public availability. WWDC has also provided a venue for a number of Mac hardware announcements over the years, including reveals of new MacBooks and iMacs.

Bloomberg says this potential reveal of its plan to transition to ARM-based Macs would serve as advance notice, however: it would not include any immediately available hardware, but would give developers time to prepare their software for ARM-based Macs to be released in 2021. The report cautions that the timing of the announcement could change, given that no ARM-based Mac hardware is actually planned for many months at least.

This isn’t the first major processor architecture switch that Apple’s Mac lineup has undergone; the company moved from PowerPC-based CPUs to Intel in 2006. That switch was originally announced in 2005, at that year’s WWDC, giving developers around half a year’s advance notice to ready themselves for the transition.

Bloomberg reported in April that Apple was planning to start selling ARM-based Macs by next year, and was developing three different in-house Mac processors based on the architecture to power those machines. Apple has made its own ARM-based processors for iOS devices, including the iPhone and iPad, for many generations now, and that expertise means those chips are now much more power-efficient, and more powerful in most respects, than the Intel chips it sources for its Mac line.


Arm’s financials and the blurring future of the semiconductor sector


Amidst the blitz of SoftBank earnings news today come the financials for all of SoftBank’s subsidiaries, including Arm Holdings, the most important chip design and research company in the world, which SoftBank bought for $32 billion back in 2016. Arm produces almost all of the key designs for the chips that run today’s smartphones, including Apple’s A13 Bionic chip that powers its flagship iPhone. In all, 22.8 billion chips using Arm licenses were shipped globally last year, according to SoftBank’s financials.

It’s a massively important company, and its finances show a complicated picture both for itself and for the semiconductor industry at large.

We sat down with Arm Holdings CEO Simon Segars last year to discuss the company’s growing appetite for ambitious research, fueled by SoftBank dollars and the bullish vision of the conglomerate’s chairman, Masayoshi Son.


Apple said to sell Macs powered by in-house ARM-based chips as early as 2021


Apple’s long-rumored Mac ARM chip transition could happen as early as next year, according to a new report from Bloomberg. The report says that Apple is currently working on three Mac processors based on the design of the A14 system-on-a-chip that will power the next-generation iPhone. The first of the Mac versions will greatly exceed the speed of the iPhone and iPad processors, according to the report’s sources.

Already, Apple’s A-series line of ARM-based chips for iPhones and iPads has been steadily improving, to the point where their performance in benchmark tests regularly exceeds that of the Intel processors currently used in Apple’s Mac line. As a result, and because Intel’s chip development has encountered a few setbacks and slowdowns in recent generations, rumors that Apple would move to its own ARM-based designs have multiplied over the past few years.

Bloomberg says that “at least one Mac” powered by Apple’s own chip is being prepared for release in 2021, to be built by chip fabricator and longtime Apple partner Taiwan Semiconductor Manufacturing Co. (TSMC). The first of these chips to power Macs will have at least 12 cores, including eight designed for high-performance applications, and four designed for lower-intensity activities with battery-preserving energy efficiency characteristics. Current Intel designs that Apple employs in devices such as the MacBook Air have four or even two cores, by comparison.

The report claims Apple will initially focus on using the chips to power a new Mac design, leaving Intel processors in its higher-end, pro-level Macs, because the ARM-based designs, while more performant by some measures, can’t yet match the top-end performance of Intel’s chip technology. ARM chips generally provide more power efficiency at the expense of raw computing power, which is why they’re so frequently used in mobile devices.

The first ARM-based Macs will still run macOS, per Bloomberg’s sources, and Apple will seek to make them compatible with software that works on current Intel-based Macs as well. That would be a similar endeavor to when Apple switched from using PowerPC-based processors to Intel chips for its Mac lineup in 2006, so the company has some experience in this regard. During that transition, Apple announced initially that the switch would take place between 2006 and 2007, but accelerated its plans so that all new Macs shipping by the end of 2006 were powered by Intel processors.


Google said to be preparing its own chips for use in Pixel phones and Chromebooks


Google is reportedly on the verge of stepping up its hardware game in a way that follows the example set by Apple, with custom-designed silicon powering future smartphones. Axios reports that Google is readying its own in-house processors for use in future Pixel devices, including phones and, eventually, Chromebooks too.

Google’s efforts around its own first-party hardware have been something of a mixed success, with some generations of the Pixel smartphone earning high praise, particularly for their camera software and photo processing. But Google has used standard Qualcomm processors to date, whereas Apple has long designed its own custom processors (the A-series) for the iPhone, giving it an edge when it comes to performance tailor-made for its OS and applications.

The Axios report says that Google’s in-house chip is code-named “Whitechapel,” and that it was made in collaboration with Samsung and uses that company’s 5-nanometer process. It includes an 8-core ARM-based processor, as well as dedicated on-chip resources for machine learning and Google Assistant.

Google has already taken delivery of the first working prototypes of this processor, but it’s said to be at least a year before the chips are used in actual shipping Pixel phones, which means we likely have at least one more generation of Pixel with a third-party processor. The report says the chip will eventually make its way to Chromebooks too, if all goes to plan, but that will take longer.

Rumors have circulated for years that Apple would eventually move its Mac line to in-house, ARM-based processors, especially as the power and performance capabilities of its A-series chips have scaled past those of their Intel equivalents. ARM-based Chromebooks already exist, so that could make for an easier transition on the Google side, provided the Google chips can live up to expectations.


Canonical’s Anbox Cloud puts Android in the cloud


Canonical, the company behind the popular Ubuntu Linux distribution, today announced the launch of Anbox Cloud, a new platform that allows enterprises to run Android in the cloud.

On Anbox Cloud, Android becomes the guest operating system that runs containerized applications. This opens up a range of use cases, ranging from bespoke enterprise apps to cloud gaming solutions.

The result is similar to what Google does with Android apps on Chrome OS, though the implementation is quite different: it is based on the LXD container manager, as well as a number of Canonical projects like Juju and MAAS for provisioning the containers and automating the deployment. “LXD containers are lightweight, resulting in at least twice the container density compared to Android emulation in virtual machines – depending on streaming quality and/or workload complexity,” the company points out in its announcement.

Anbox itself, it’s worth noting, is an open-source project that came out of Canonical and the wider Ubuntu ecosystem. Launched by Canonical engineer Simon Fels in 2017, Anbox runs the full Android system in a container, which in turn allows you to run Android applications on any Linux-based platform.

What’s the point of all of this? Canonical argues that it allows enterprises to offload mobile workloads to the cloud and then stream those applications to their employees’ mobile devices. But Canonical is also betting on 5G to enable more use cases, not so much because of the extra bandwidth as because of the low latencies it enables.

“Driven by emerging 5G networks and edge computing, millions of users will benefit from access to ultra-rich, on-demand Android applications on a platform of their choice,” said Stephan Fabel, director of Product at Canonical, in today’s announcement. “Enterprises are now empowered to deliver high performance, high density computing to any device remotely, with reduced power consumption and in an economical manner.”

Outside of the enterprise, one of the use cases that Canonical seems to be focusing on is gaming and game streaming. A server in the cloud is generally more powerful than a smartphone, after all, though that gap is closing.

Canonical also cites app testing as another use case, given that the platform would allow developers to test apps on thousands of Android devices in parallel. Most developers, though, prefer to test their apps on real, not emulated, devices, given the fragmentation of the Android ecosystem.

Anbox Cloud can run in the public cloud, though Canonical is specifically partnering with edge computing specialist Packet to host it on the edge or on-premise. Silicon partners for the project are Ampere and Intel.


Intel and Google plot out closer collaboration around Chromebooks and the future of computing


Intel, the chip-making giant, has been refocusing its strategy in recent months. While it has sold its mobile chip operation to Apple and is reportedly looking for a buyer for its connected home division, it’s also been going through the difficult task of rethinking how best to tackle the longtime bread and butter of its business: the PC.

Part of that latter strategy is getting a big boost this week at CES 2020. Here, Intel is today announcing a deeper partnership with Google to design chips and specifications for Chromebooks built on Project Athena. Project Athena is a framework, first announced last year, that covers both design and technical specs, with the aim of building the high-performance laptops of tomorrow that can be used not just for work but also for media streaming, gaming, enterprise applications and more, all on the go — powered by Intel, naturally.

(The specs include things like “fast wake” via fingerprint, push button or lid lift; Intel Core i5 or i7 “Ice Lake” processors; better battery life and charging; Wi-Fi 6; touch displays; 2-in-1 designs; narrow bezels; and more.)

Earlier today, the first two Chromebooks built on those Athena specifications — from Samsung and Asus — were announced by the respective companies, and Intel says that there will be more to come. And on stage, Google joined Intel during its keynote to also cement the two companies’ commitment to the mission.

“We’re going a step further and deepening our partnership with Google to bring Athena to Chromebooks,” Gregory Bryant, the EVP and GM of Intel’s client computing group, said in an interview with TechCrunch ahead of today’s news. “We’ve collaborated very closely with Google [so that device makers] can take advantage of these specs.”

For Intel, having a roster of Chromebooks using Athena is important because the category has been very popular: it brings Intel’s processors into machines bought by people who want access to Google’s security services, its app ecosystem and more.

But stepping up the specifications for Chromebooks is as important for Google as it is for Intel in terms of the bottom line and growing business.

“This is a significant change for Google,” said John Solomon, Google’s VP of Chrome OS, in an interview ahead of today’s news. “Chromebooks were successful in the education sector initially, but in the next 18 months to two years, our plan is to go broader, expanding to consumer and enterprise users. Those users have greater expectations and a broader idea of how to use these devices. That puts the onus on us to deliver more performance.”

The renewed effort comes at an interesting time. The laptop market is in a generally tight spot these days. Overall, the personal computing market is in a state of decline, and forecast to continue that way for the next several years.

But there is a slightly brighter picture for the kinds of machines coming out of collaborations like the one between Intel, Google and their hardware partners: IDC forecasts that 2-in-1 devices — by which it means convertible PCs and detachable tablets — and ultra-slim notebook PCs “are expected to grow 5% collectively” between 2019 and 2023, versus a compound annual growth rate of -2.4% for the overall market over the same period. So there is growth, but not a huge amount.

Up against that is the strength of the smartphone market. Granted, it, too, is facing some issues as multiple markets reach smartphone saturation and consumers are slower to upgrade.

All that is to say that there are challenges. And that is why Intel, whose fortunes are so closely linked to those of personal computing devices since it makes the processors for them, has to make a big push around projects like Athena.

Up to this month, all of the laptops built to Athena specs have been Windows PCs — 25 to date — but Intel said from the start that Chromebooks would be part of the mix, helping bring the total number of Athena-based devices to 75 by the end of this year (adding 50 in 2020).

Chromebooks are a good area for Intel to focus on, as they seem to be outpacing growth in the wider market, despite some notable drawbacks in how Chrome OS has been conceived as a “light” operating system with few native tools and integrations in favor of apps. IDC said that in Q4 of 2019, growth was 19% year-on-year, and from what I understand the holiday period saw an even stronger rise. In the US, Chromebooks had a market share of around 27% last November, according to NPD/GfK.

What’s interesting is the collaborative approach that Intel — and Google — are taking to grow. The Apple-style model is to build vertical integration into the hardware business to ensure a disciplined and unified approach to form and function: the hardware’s specifications exist specifically to handle the kinds of services Apple envisions working on its devices, and in turn Apple hands down very specific requirements to third parties whose non-native services and apps run on those devices.

While Google is not in the business of building laptops or processors (yet?), and Intel builds little beyond the processors themselves, what the two have created here is an attempt at the kind of disciplined specification that mimics what you might get in a vertically integrated business.

“It’s all about building the best products and delivering the best experience,” Bryant said.

“We can’t do what we do without Intel’s help and this close engineering collaboration over the last 18 months,” Solomon added. “This is the beginning of more to come in this space, with innovation that hasn’t previously been seen.”

Indeed, going forward, Bryant and Solomon wouldn’t rule out Athena and their collaboration extending beyond laptops.

“Our job is to make the PC great. If we give consumers value and a reason to buy a PC we can keep the PC alive,” said Bryant, but he added that Intel is continuing to evolve the specification, too.

“From a form factor perspective, you’ll see an expansion of devices that have dual displays or different kinds of technology and form factors,” he said. “Our intention is to expand and do variations on what we have shown today.”



Kid-focused STEM device startup Kano sees layoffs as it puts Disney e-device on ice


London-based STEM device maker Kano has confirmed it’s cutting a number of jobs, a move it says is part of a restructuring effort to shift focus to “educational computing”.

The job cuts — from 65 to 50 staff — were reported earlier by The Telegraph. Kano founder Alex Stein confirmed in a call with TechCrunch that Kano will have 50 staff going into next year, though he said the kid-focused learn-to-code device business is also adding jobs in engineering and design while eliminating other roles as it shifts focus.

He also suggested some of the cuts are seasonal and cyclical — related to getting through the holiday season.

Per Stein, jobs are being cut as the company moves from building atop the Raspberry Pi platform — where it started, back in 2013, with its crowdfunded DIY computer — to a Windows-based learning platform.

Other factors he pointed to in relation to the layoffs include a new manufacturing setup in China, with a “simpler, larger contract manufacturer”; fewer physical retail outlets to support, with Kano leaning more on Amazon (which he said is “cheaper to support”); fewer dependencies on large partners and agencies, with Stein claiming 18% of US parents with kids aged 6-12 are now familiar with the brand, reducing its marketing overhead; and a desire to shrink the number of corporate managers vs makers on its books as “we’ve seen a stronger response to our first-party Kano products — Computer Kit, Pixel Kit, Motion Sensor Kit — than expected this year”.

“We have brought on some roles that are more focused on this new platform [Kano PC], and some roles that were focused on the Raspberry Pi are no longer with us,” he also told TechCrunch.

Kano unveiled its first Windows-based PC this fall. The 11.6-inch touch-enabled, Intel Atom-powered computer costs $300, which puts it in the same ballpark as Google-powered Chromebooks.

Google has maintained a steady focus on the educational computing market, putting a competitive squeeze on smaller players like Kano that are trying to carve out a business selling their own brand of STEM-focused hardware. Against the Google Goliath, Stein touts factors such as the Kano PC’s relative repairability and attention to computing performance (which he claims is “on a par with the Surface Go”), in addition to having now thrown its lot in with rival giant Microsoft.

“The more and more we got into school environments the more and more we were in conversations with major North American distributors to schools, the more we saw that people wanted that ‘DIY’… product design, they wanted the hackability and extensibility of the kit, they wanted the tools to be open source and manipulable but they also wanted to be able to run Photoshop and to run Class Dashboard and to run Microsoft Office. And so that was when we struck the partnership with Microsoft,” said Stein.

“The Windows computer is packed with content and curriculum for teachers and an integration with Microsoft Teams, which requires a different sort of development capability,” he added.

“The roles we’re adding are around subscription, they’re around the computer, building new applications and tools for the computer and continuing to enrich the number of projects that are available for our members now — so we’re doing things like allowing people to connect the sensors in their wands to household IoT device. We’re introducing, over the Christmas period, a new collaborative drawing app.”

According to Stein, Kano is “already seeing demand for 60,000 units in this next calendar year” for its Windows-based PC, which he said is “well beyond what we expect… given the price-point.”

He did not, however, put a figure on exact sales of the Kano PC to date.

He also confirmed Kano will be dialling back the range of products it offers next year.

It recently emerged that an own-brand camera device, which Kano first trailed back in 2016, will not now be shipping. Stein also told us that another co-branded Disney product they’d been planning for 2020 is being “put back” — with no new date for release as yet.

Stein denied sales have been lacklustre — claiming the current Star Wars and Frozen e-products have “done enough for us”. (While a co-branded Harry Potter e-wand is selling faster than expected, per Stein, who said they had expected to have stock until March but are “selling out”.)

“The reorganization we’ve done has nothing to do with growth and users,” he told us. “We are on track to sell through more units as well as products at a higher average selling price this fiscal year. We’re selling out of Wands when we expected to have stock all the way to March. We have more pre-launch demand for the Kano PC than anything we’ve ever done.”

Of the additional co-branded Disney e-product that is being delayed, and may not now launch at all next year, Stein told us: “The fact is we’re in negotiations with Disney around this — and around the timing of it. Given that we’re not certain we’re going to be doing it in 2020, some of the contractor roles in particular that we brought on to do the licensing sign-off pieces, to develop some of the content around those brands, some of the apparatus set up to manage those partnerships — we don’t need any more.”

“We introduced three new hardware SKUs this year. I don’t think we’ll do three new hardware SKUs next year,” he added, confirming the intention is to trim the number of device launches in 2020 to focus on the Kano PC.

One source we spoke to suggested Kano is considering sunsetting its partner strategy entirely. However Stein did not go that far in his comments to us.

“We’ve been riding a certain bear for a few years. We’re jumping to a new bear. That’s always going to create a bit of exhilaration. But I think this is a place of real promise,” was how he couched the pivot.

“I think what Kano does better than anyone else in the world is crafting an experience around technology that opens up its attributes to a wider audience,” Stein also said when asked whether hardware or software will be its main focus going forward. “The hardware element is crucial and beautiful and we make some of the world’s most interesting dynamic physical products. It’s an often told story that hardware’s very hard and is brutal — and yeah, because you get it right you change the fabric of society.

“It’s hard for me to draw a line between hardware and software for the business because we’ve always been asked that and seven years into the business we’ve found the greatest things that people do with the products… it’s always when there’s a combination of the two. So we’re proud that we’re good at combining the two and we’re going to continue to do it.”

The STEM device space has been going through bumpy times in recent years, as early hype and investment have failed to translate into sustained revenues.

The category is certainly filled with challenges: a low barrier to entry leading to plentiful (if varied-quality) competition; the demands of building safe, robust and appealing products for (fickle) kids that tightly and reliably integrate hardware and software; and the box-checking and process work needed to win over teachers and support schools’ curriculum requirements, which is essential for selling directly to the education market.

Given so many demands on STEM device makers, it’s not surprising this year has seen a number of these startups exiting to other players and/or larger electronics makers, such as Sphero picking up littleBits.

A couple of years ago Sphero went through its own pivot out of selling co-branded Disney ‘learn to code’ gizmos to zoom in on the education space.

Another UK-based STEM device maker, pi-top, has also been through several rounds of layoffs recently, apparently as part of its own pivot to the US edtech market.

More consolidation in the category seems highly likely. And given the new relationship between Kano and Microsoft, joining Redmond via acquisition may be the obvious end point for the startup.

Per The Telegraph’s report, Kano is in the process of looking to raise more funding; however, Stein did not comment when asked to confirm the company’s funding situation.

The startup last reported a raise just over two years ago — when it closed a $28M Series B round led by Thames Trust and Breyer Capital. Index Ventures, the Stanford Engineering Venture Fund, LocalGlobe, Marc Benioff, John Makinson, Collaborative Fund, Triple Point Capital, and Barclays also participated.

TechCrunch’s Ingrid Lunden contributed to this report 


Intel and Argonne National Lab on ‘exascale’ and their new Aurora supercomputer


The scale of supercomputing has grown almost too large to comprehend, with millions of compute units performing calculations at rates requiring, for the first time, the exa prefix — denoting quintillions of operations per second. How was this accomplished? With careful planning… and a lot of wires, say two people close to the project.

Having noted the news that Intel and Argonne National Lab were planning to take the wrapper off a new exascale computer called Aurora (one of several being built in the U.S.) earlier this year, I recently got a chance to talk with Trish Damkroger, head of Intel’s Extreme Computing Organization, and Rick Stevens, Argonne’s associate lab director for computing, environment and life sciences.

The two discussed the technical details of the system at the Supercomputing conference in Denver, where, probably, most of the people who can truly say they understand this type of work already were. So while you can read about the nuts and bolts of the system, including Intel’s new Xe architecture and Ponte Vecchio general-purpose compute chip, in industry journals and the press release, I tried to get a little more of the big picture from the two.

It should surprise no one that this is a project long in the making — but you might not guess exactly how long: more than a decade. Part of the challenge, then, was to establish computing hardware that was leagues beyond what was possible at the time.

“Exascale was first being started in 2007. At that time we hadn’t even hit the petascale target yet, so we were planning like three to four magnitudes out,” said Stevens. “At that time, if we had exascale, it would have required a gigawatt of power, which is obviously not realistic. So a big part of reaching exascale has been reducing power draw.”
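The arithmetic behind that gigawatt figure is worth a quick sanity check. A minimal sketch follows; the efficiency numbers are illustrative assumptions for the 2007 era, not Argonne's actual figures:

```python
# Order-of-magnitude gap: exascale (1e18 FLOP/s) vs. the roughly
# 1e14 FLOP/s top systems of 2007, and why power draw had to improve.
import math

exascale = 1e18      # target: 10^18 floating-point operations per second
circa_2007 = 1e14    # rough top-system throughput when planning began

magnitudes_out = math.log10(exascale / circa_2007)
print(f"planning horizon: ~{magnitudes_out:.0f} orders of magnitude")

# Naive extrapolation: assume ~1 GFLOP/s per watt, an illustrative
# 2007-class efficiency, and scale it linearly to exascale.
watts_naive = exascale / 1e9
print(f"naive power draw: {watts_naive / 1e9:.0f} GW")  # the "gigawatt" quote

# Real exascale machines target tens of megawatts, so efficiency
# had to improve by more than an order of magnitude.
watts_target = 40e6  # assumed ~40 MW budget, illustrative
print(f"required efficiency gain: ~{watts_naive / watts_target:.0f}x")
```

Under these assumed numbers, the naive design lands at exactly the gigawatt Stevens mentions, which is why power reduction dominated the planning.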

Intel’s supercomputing-focused Xe architecture is based on a 7-nanometer process, pushing the very edge of classical physics — much smaller and quantum effects start coming into play. But the smaller the gates, the less power they draw, and microscopic savings add up quickly when you’re talking billions and trillions of them.
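The savings described above follow from the standard dynamic-power relation for CMOS logic, P ≈ α·C·V²·f: shrinking a gate lowers its capacitance C and typically lets it run at a lower voltage V, which enters squared. A toy comparison with assumed (not real process) values:

```python
# CMOS dynamic power per gate: P = activity * capacitance * voltage^2 * frequency.
# All values below are illustrative, not actual process parameters.
def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    """Dynamic switching power of one gate, in watts."""
    return activity * capacitance_f * voltage_v**2 * freq_hz

older   = dynamic_power(0.1, 1.0e-15, 1.0, 2e9)  # larger gate, higher voltage
smaller = dynamic_power(0.1, 0.5e-15, 0.7, 2e9)  # shrunk gate, lower voltage

print(smaller / older)  # ~0.245: each gate draws roughly a quarter the power
```

Multiply that per-gate saving by trillions of transistors and the difference between "gigawatt" and "tens of megawatts" starts to look plausible.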

But that merely exposes another problem: If you increase the power of a processor by 1000x, you run into a memory bottleneck. The system may be able to think fast, but if it can’t access and store data equally fast, there’s no point.

“By having exascale-level computing, but not exabyte-level bandwidth, you end up with a very lopsided system,” said Stevens.
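The "lopsided system" Stevens describes is often quantified as machine balance: bytes per second of memory bandwidth divided by floating-point operations per second. A toy calculation with hypothetical numbers, chosen only to show why scaling compute 1000x without scaling bandwidth starves the cores:

```python
# Machine balance = memory bandwidth (bytes/s) / compute rate (FLOP/s).
# Figures are hypothetical round numbers, not Aurora's specifications.
def balance(bandwidth_bytes_per_s, flops):
    """Bytes of memory traffic available per floating-point operation."""
    return bandwidth_bytes_per_s / flops

petascale            = balance(1e14, 1e15)  # 0.1 bytes per FLOP
exascale_same_memory = balance(1e14, 1e18)  # 0.0001 bytes per FLOP

print(petascale, exascale_same_memory)
# Compute grew 1000x but balance collapsed 1000x: the fast cores sit idle
# waiting on data unless the memory system scales with them.
```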

And once you clear both those obstacles, you run into a third: what’s called concurrency. High performance computing is as much about synchronizing a task across huge numbers of computing units as it is about making those units as powerful as possible. The machine operates as a whole, and as such every part must communicate with every other part — which becomes something of a problem as you scale up.

“These systems have many thousands of nodes, and the nodes have hundreds of cores, and the cores have thousands of computation units, so there’s like, billion-way concurrency,” Stevens explained. “Dealing with that is the core of the architecture.”
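Stevens’ "billion-way concurrency" is just the product of the levels in the parallelism hierarchy. With hypothetical round counts of the same order he cites:

```python
# Multiplying out the parallelism hierarchy Stevens describes.
# Counts are hypothetical round numbers, not Aurora's actual configuration.
nodes          = 10_000  # "many thousands of nodes"
cores_per_node = 100     # "hundreds of cores"
units_per_core = 1_000   # "thousands of computation units"

concurrent_streams = nodes * cores_per_node * units_per_core
print(f"{concurrent_streams:,}")  # 1,000,000,000 -> billion-way concurrency
```

Keeping a billion independent streams fed and synchronized, rather than raw per-core speed, is what Stevens calls "the core of the architecture."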

How they did it, I, being utterly unfamiliar with the vagaries of high performance computing architecture design, would not even attempt to explain. But they seem to have done it, as these exascale systems are coming online. The solution, I’ll only venture to say, is essentially a major advance on the networking side. The level of sustained bandwidth between all these nodes and units is staggering.

Making exascale accessible

While even in 2007 you could predict that we’d eventually reach such low-power processes and improved memory bandwidth, other trends would have been nearly impossible to predict — for example, the exploding demand for AI and machine learning. Back then it wasn’t even a consideration, and now it would be folly to create any kind of high performance computing system that wasn’t at least partially optimized for machine learning problems.

“By 2023 we expect AI workloads to be a third of the overall HPC server market,” said Damkroger. “This AI-HPC convergence is bringing those two workloads together to solve problems faster and provide greater insight.”

To that end the architecture of the Aurora system is built to be flexible while retaining the ability to accelerate certain common operations, for instance the type of matrix calculations that make up a great deal of certain machine learning tasks.

“But it’s not just about performance, it has to be about programmability,” she continued. “One of the big challenges of an exascale machine is being able to write software to use that machine. oneAPI is going to be a unified programming model — it’s based on the open standard of Data Parallel C++, and that’s key for promoting use in the community.”

Summit, as of this writing the most powerful single computing system in the world, is very dissimilar to many of the systems developers are used to working on. If the creators of a new supercomputer want it to have broad appeal, they need to bring it as close to being like a “normal” computer to operate as possible.

“It’s something of a challenge to bring x86-based packages to Summit,” Stevens noted. “The big advantage for us is that, because we have x86 nodes and Intel GPUs, this thing is basically going to run every piece of software that exists. It’ll run standard software, Linux software, literally millions of apps.”

I asked about the costs involved, since with a system like this it’s something of a mystery how a half-billion-dollar budget gets broken down. Really I just thought it would be interesting to know how much of it went to, say, RAM versus processing cores, or how many miles of wire they had to run. Though both Stevens and Damkroger declined to comment, the former did note that “the backlink bandwidth on this machine is many times the total of the entire internet, and that does cost something.” Make of that what you will.

Aurora, unlike its cousin El Capitan at Lawrence Livermore National Lab, will not be used for weapons development.

“Argonne is a science lab, and it’s open, not classified science,” said Stevens. “Our machine is a national user resource; we have people using it from all over the country. A large amount of time is allocated via a process that’s peer reviewed to accommodate the most interesting projects. About two thirds is that, and the other third is Department of Energy stuff, but still unclassified problems.”

Initial work will be in climate science, chemistry and data science, with 15 teams across those fields signed up for major projects to be run on Aurora — details to be announced soon.
