For decades, the IT market has been obsessed with the competition between suppliers of processors, but the rivalries between the makers of networking chips and the full-blown switches based on them are just as intense. Such a rivalry exists between the InfiniBand chips from Mellanox Technologies and the Omni-Path chips from Intel, which are based on technologies Intel got six years ago when it acquired the InfiniBand business from QLogic for $125 million.
Hyperconverged infrastructure in some ways is like the credit card in those old TV ads: in this case, it’s everywhere that enterprises want to be. HCI puts compute and storage on the same cluster, tightly integrates them with networking and unified management tools, and essentially gives enterprises a private cloud for the datacenter while pushing compute out to the edges in a consistent manner.
HCI also promises a bunch of other things beneficial to enterprises, including streamlined management, lower costs, faster speeds, and easier scalability than traditional IT systems to better address the rise of cloud computing, analytics, …
Supercomputers keep getting faster, but they also keep getting more expensive. This is a problem, and one that will eventually affect every kind of computer until we get a new technology that is not based on CMOS chips.
The general budget and some of the feeds and speeds are out thanks to the requests for proposal for the “Frontier” and “El Capitan” supercomputers that will eventually be built for Oak Ridge National Laboratory and Lawrence Livermore National Laboratory. So now is a good time to take a look at not just the historical performance of capability …
Nvidia caused a shift in high-end computing more than a decade ago when it introduced its general-purpose GPUs and CUDA development platform to work with CPUs to increase the performance of compute-intensive workloads in HPC and other environments and drive greater energy efficiencies in datacenters.
Nvidia and to a lesser extent AMD, with its Radeon GPUs, took advantage of the growing demand for more speed and less power consumption to build out their portfolios of GPU accelerators and expand their use in a range of systems, to the point where in the last Top500 list of the world’s fastest …
There is a direct correlation between the length of time that Nvidia co-founder and chief executive officer Jensen Huang speaks during the opening keynote of each GPU Technology Conference and the total addressable market of accelerated computing based on GPUs.
This stands to reason since the market for GPU compute is expanding. We won’t discuss which is the cause and which is the effect. Or maybe we will.
It all started with offloading the parallel chunks of HPC applications from CPUs to GPUs in academia in the early 2000s; the accelerators were then first used in production HPC centers a decade …
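The sort of work that first moved off the CPU is the data-parallel inner loop, where the same arithmetic is applied independently to every element of a large array. A minimal sketch of that pattern, using NumPy on the host as a stand-in for an actual GPU kernel (the function name and array sizes here are purely illustrative):

```python
import numpy as np

def saxpy(a, x, y):
    # "Single-precision A times X Plus Y": the canonical data-parallel
    # kernel. Each output element is independent of all the others, which
    # is exactly why loops like this were the first to be offloaded to GPUs.
    return a * x + y

x = np.arange(4, dtype=np.float32)   # [0, 1, 2, 3]
y = np.ones(4, dtype=np.float32)     # [1, 1, 1, 1]
print(saxpy(2.0, x, y))              # [1. 3. 5. 7.]
```

On a real accelerator the same computation would be expressed as a kernel launched across thousands of threads, one (or more) per element, but the structure of the work is identical.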
Just because supercomputers are engineered to be far more powerful over time does not necessarily mean programmer productivity will follow the same curve. Added performance means more complexity, which for parallel programmers means working harder, especially when accelerators are thrown into the mix.
It was with all of this in mind that DARPA’s High Productivity Computing Systems (HPCS) project rolled out in 2002 to support higher performance but more usable HPC systems. This is the era that spawned systems like Blue Waters, for instance, and various co-design efforts from IBM and Intel to bring parallelism within broader reach of the …
We all remember learning to ride a bike. Those early wobbly moments with “experts” holding on to your seat while you furiously pedaled and tugged away at the handlebars trying to find your own balance.
Training wheels were the obvious hardware choice for those unattended and slightly dangerous practice sessions. Training wheel hardware was often installed by your then “expert” in an attempt to avoid your almost inevitable trip to the ER. Eventually one day, often without planning, you no longer needed the support, and you could make it all happen on your own.
Today, AI and ML need this …
Supercomputer makers have been on their exascale marks, and they have been getting ready, and now the US Department of Energy has just said “Go!”
The requests for proposal have been opened up for two more exascale systems, with a budget ranging from $800 million to $1.2 billion for a pair of machines to be installed at Oak Ridge National Laboratory and Lawrence Livermore National Laboratory and a possible sweetener of anywhere from $400 million to $600 million that, provided funding can be found, allows Argonne National Laboratory to also buy yet another exascale machine.
Oak Ridge, Argonne, and Livermore …
Hard to believe, but the R programming language has been with us since 1993.
A quarter century has now passed since its authors, Robert Gentleman and Ross Ihaka, originally conceived the R platform as an implementation of the S programming language.
Continuous global software development has taken those original concepts, inspired by John Chambers’ S and by the Scheme language of 1975, to now include parallel computing, bioinformatics, social science, and more recently complex AI and deep learning methods. Layers have been built on top of layers, and today’s R looks nothing like 1990s R.
So where are we now, especially with the emerging opportunities …
The Open Compute Project (OCP) held its 9th annual US Summit recently, with 3,441 registered attendees this year. While that might seem small for a top-tier high tech event, there were also 80 exhibitors representing most of the cloud datacenter supply chain, plus a host of outstanding technical sessions. We are always on the hunt for new iron, and not surprisingly the most important gear we saw at OCP this year was related to compute acceleration.
Here is how that new iron we saw breaks down across the major trends in acceleration.
The first interesting thing we saw was a …
In the long run, provided there are enough API pipes into the code, software as a service might be the most popular way to consume applications and systems software for all but the largest organizations, those running at such scale that they can command component prices almost as good as those the public cloud intermediaries get. In many cases, the hassle of setting up and managing complex code outweighs the volume pricing benefits of doing it yourself. The difference can be a profit margin for both cloud builders and the software companies that peddle their …
Workflow automation has been born of necessity and has evolved an increasingly sophisticated set of tools to manage the growing complexity of the automation itself.
The same theme keeps emerging across the broader spectrum of enterprise and research IT. For instance, we spoke recently about the need to profile software and algorithms when billions of events per iteration are generated from modern GPU systems. This is a similar challenge and fortunately, not all traditional or physical business processes fall into this scale bucket. Many are much less data intensive, but can have such a critical impact in “time to …
Amazon Web Services essentially sparked the public cloud race a dozen years ago when it first launched the Elastic Compute Cloud (EC2) service and then in short order the Simple Storage Service (S3), giving enterprises access to the vast compute and storage resources that its giant retail business leaned on.
Since that time, AWS has grown rapidly in the number of services it offers, the number of customers it serves, the amount of money it brings in and the number of competitors – including Microsoft, IBM, Google, Alibaba, and Oracle – looking to chip away …
At the GPU Technology Conference last week, we told you all about the new NVSwitch memory fabric interconnect that Nvidia has created to link multiple “Volta” GPUs together and that is at the heart of the DGX-2 system that the company has created to demonstrate its capabilities and to use on its internal Saturn V supercomputer at some point in the future.
Since the initial announcements, Nvidia has revealed more about NVSwitch, including details of the chip itself and how it helps applications wring a lot more performance from the GPU accelerators.
Our first observation upon looking …
There has been plenty of talk about where FPGA acceleration might fit into high performance computing but there are only a few testbeds and purpose-built clusters pushing this vision forward for scientific applications.
While we do not necessarily expect supercomputing centers to turn their backs on GPUs as the accelerator of choice in favor of FPGAs anytime in the foreseeable future, there is some smart investment happening in Europe and, to a lesser extent, in the U.S. that takes advantage of recent hardware additions and high-level tool development that put field programmable devices within closer reach, even for centers whose users …
Enterprises can see cost and efficiency benefits when they migrate workloads into the cloud, but such moves also come with their share of challenges in complexity and management.
This is particularly true as organizations embrace a compute environment that includes multiple clouds – both public and private – as well as one or more on-premises datacenters. True, the cloud enables businesses to easily scale up or down depending on the workloads they’re running, to pay for only the infrastructure they’re using rather than having to invest upfront in hardware, to put the onus of integration on the cloud providers, and …
Ian Buck doesn’t just run the Tesla accelerated computing business at Nvidia, which is one of the company’s fastest-growing and most profitable businesses in its twenty-five-year history. The work that Buck and other researchers started at Stanford University in 2000 and then continued at Nvidia helped to transform a graphics card shader into a parallel compute engine that is helping to solve some of the world’s toughest simulation and machine learning problems.
The annual GPU Technology Conference was held by Nvidia last week, and we sat down and had a chat with Buck about a bunch of things …
Bioinspired computing is nothing new but with the rise in mainstream interest in machine learning, these architectures and software frameworks are seeing fresh light. This is prompting a new wave of young companies that are cropping up to provide hardware, software, and management tools—something that has also spurred a new era of thinking about AI problems.
We most often think of these innovations happening at the server and datacenter level, but more algorithmic work is being done (to better suit embedded hardware) to deploy comprehensive models on mobile devices that allow for long-term learning on single instances of object recognition …
A major transformation is happening now as technological advancements and escalating volumes of diverse data drive change across all industries. Cutting-edge innovations are fueling digital transformation on a global scale, and organizations are leveraging faster, more powerful machines to operate more intelligently and effectively than ever.
Recently, Hewlett Packard Enterprise (HPE) announced the new HPE Apollo 6500 Gen10 server, a groundbreaking platform designed to tackle the most compute-intensive high performance computing (HPC) and deep learning workloads. Deep learning – an exciting development in artificial intelligence (AI) – enables machines to solve highly complex problems quickly by autonomously analyzing …