HPC for Manufacturing
Digital innovation is the new imperative.
In today’s marketplace, the rise of digital technology and smart products has changed the product lifecycle. Products must not only get to market faster and at the right time; they must also evolve and improve to sustain value for both the designer/manufacturer and the owner/operator. Many manufacturers are challenged to keep pace, because the transformation is not limited to product design and definition; it also significantly changes the way products are manufactured, assembled, delivered, and maintained.
IT leadership is being asked to…
The vast amounts of data generated by digital transformation efforts are creating compute-intensive workloads that demand far more processing power, and this trend will only accelerate as products become smarter and more sophisticated. Meeting your own digital transformation goals will require a more robust compute solution.
Today, many organizations have on-premises high-performance computing (HPC) infrastructure. Our team has seen everything from a set of servers in an office closet to large datacenter environments, and everything in between, depending on what the company is trying to accomplish.
While on-premises HPC is a good option for many scenarios, it also comes with inherent constraints. These include:
- Large up-front capital costs.
- Limited space for new hardware.
- Rapid hardware obsolescence.
- Long lead times for shared capacity.
- Ongoing operational costs and constraints.
- No ability to scale beyond your own walls.
Today, the use of high-performance computing is pervasive in discrete manufacturing organizations, both for designing and architecting new products and for improving existing ones. From durability and stress analysis to design automation and topology optimization, HPC is a critical element of the process.
There are two main HPC scenarios we see today:
- The first is a single large simulation job run in a high-performance environment across multiple cores and nodes, often on supercomputing-class infrastructure with many tightly networked machines.
- The second is many jobs or simulations, sometimes hundreds or even thousands of small ones, run on the same infrastructure across multiple cores or nodes.
This creates two types of allocation problems. First, how do you accommodate the engineer who wants to run a single, massive job, such as a crashworthiness, aerodynamics, or fluid dynamics simulation? Second, how do you accommodate dozens or hundreds of smaller jobs for simulations of various types, such as topology optimization for 3D printing?
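To make the two patterns concrete, here is a minimal sketch using the azure-batch Python SDK. It assumes an existing Batch account and pools already configured for each workload; the account name, pool and job IDs, VM counts, and solver command lines are illustrative placeholders, not part of any specific Azure HPC offering.

```python
# Minimal sketch of the two HPC workload patterns on Azure Batch.
# Account details, pool/job IDs, node counts, and solver commands are illustrative.
import azure.batch.models as batchmodels
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials

credentials = SharedKeyCredentials("mybatchaccount", "<account-key>")
client = BatchServiceClient(
    credentials, batch_url="https://mybatchaccount.westus2.batch.azure.com")

# Pattern 1: one large, tightly coupled MPI job spread across many nodes.
client.job.add(batchmodels.JobAddParameter(
    id="crash-sim",
    pool_info=batchmodels.PoolInformation(pool_id="hpc-mpi-pool")))  # pool created with inter-node communication enabled

client.task.add(
    job_id="crash-sim",
    task=batchmodels.TaskAddParameter(
        id="crash-run-001",
        command_line="mpirun -np 256 ./crash_solver model.inp",       # hypothetical solver invocation
        multi_instance_settings=batchmodels.MultiInstanceSettings(
            number_of_instances=16,                                   # 16 nodes cooperate on one MPI job
            coordination_command_line="echo coordinating")))

# Pattern 2: hundreds of small, independent simulations (e.g., topology-optimization variants).
client.job.add(batchmodels.JobAddParameter(
    id="variant-sweep",
    pool_info=batchmodels.PoolInformation(pool_id="hpc-batch-pool")))

tasks = [
    batchmodels.TaskAddParameter(
        id=f"variant-{i:03d}",
        command_line=f"./topo_opt --design variants/design_{i:03d}.json")  # hypothetical per-variant run
    for i in range(200)
]
for start in range(0, len(tasks), 100):  # the service accepts up to 100 tasks per add_collection call
    client.task.add_collection(job_id="variant-sweep", value=tasks[start:start + 100])
```

The key design difference is that the large job is expressed as a single multi-instance task so the scheduler co-locates and coordinates its nodes, while the sweep is just a flat collection of independent tasks the scheduler can pack onto whatever cores are free.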
The convergence of physical and digital components in manufacturing has enabled a shift toward smarter, more connected products that are continuously iterated on. Innovation and iteration require more sophisticated product designs that must be quickly modeled, simulated, and validated to achieve functional quality and safety goals. Standard simulations such as Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) can take hours to execute, leaving engineers waiting for results. Engineers also need to integrate IoT data from products already in the field into their simulations to derive new insights and speed up validation. By creating a virtual model of a physical product or production asset, manufacturers can use this digital twin to integrate IoT data and run numerous simulations, backed by the power of Azure High Performance Computing.
Azure HPC enables engineering teams to spin up workstations in the cloud to run complex simulations like CFD and FEA in parallel. Users can also run simulations using digital twins to compare parts, assemblies, and subassemblies, providing actionable performance insight for future product optimization and for supply chain vendors.
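As one way the IoT-to-simulation flow might look, here is a minimal sketch assuming the azure-digitaltwins-core and azure-identity Python packages; the twin endpoint, model ID, and property names are hypothetical and depend entirely on how your own twins are modeled.

```python
# Minimal sketch: pull IoT-derived properties from digital twins to parameterize simulations.
# The endpoint, model ID, and property names are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

client = DigitalTwinsClient(
    "https://my-twins-instance.api.wus2.digitaltwins.azure.net",
    DefaultAzureCredential())

# Query all twins built from a hypothetical pump model.
query = "SELECT * FROM digitaltwins WHERE IS_OF_MODEL('dtmi:contoso:Pump;1')"
for twin in client.query_twins(query):
    # Use telemetry-derived twin properties as boundary conditions for a CFD run.
    sim_params = {
        "inlet_pressure_kpa": twin.get("inletPressure"),        # hypothetical property
        "operating_temp_c": twin.get("operatingTemperature"),   # hypothetical property
        "runtime_hours": twin.get("runtimeHours"),              # hypothetical property
    }
    print(f"Queue CFD job for {twin['$dtId']} with {sim_params}")
```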
With Azure HPC Simulation and Analysis, you can…
- Remove on-premises constraints by bursting to the cloud on demand to run large jobs and meet business goals.
- Run more compute-intensive simulations by managing, scheduling, and scaling jobs for better performance and throughput (see the autoscaling sketch after this list).
- Iterate and optimize product design by running rapid simulations in parallel to validate every product variant and version faster.
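As one example of what "scheduling and scaling jobs" can look like in practice, here is a minimal sketch of an Azure Batch autoscale formula applied through the azure-batch Python SDK; the pool ID, node cap, and sampling window are illustrative choices, not prescribed values.

```python
# Minimal sketch: let an Azure Batch pool grow and shrink with the simulation queue.
# Pool ID, node cap, and evaluation interval are illustrative.
from datetime import timedelta

from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials

client = BatchServiceClient(
    SharedKeyCredentials("mybatchaccount", "<account-key>"),
    batch_url="https://mybatchaccount.westus2.batch.azure.com")

# Target roughly one node per pending simulation task, capped at 64 nodes,
# and let running tasks finish before nodes are removed.
autoscale_formula = """
$pending = avg($PendingTasks.GetSample(TimeInterval_Minute * 5));
$TargetDedicatedNodes = min(64, $pending);
$NodeDeallocationOption = taskcompletion;
"""

client.pool.enable_auto_scale(
    pool_id="hpc-batch-pool",
    auto_scale_formula=autoscale_formula,
    auto_scale_evaluation_interval=timedelta(minutes=5))
```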
With the Azure HPC solution, users gain the flexibility of cloud-based HPC, which avoids the on-premises constraints listed earlier.
Azure HPC provides on-demand high-performance computing resources that support large parallel and batch compute jobs. Engineers no longer have to sit in a queue: when demand spikes, they can take advantage of new resources right away and get the results they need fast. Users can extend their on-premises cluster to the cloud when they need more capacity, with no scramble for new physical space, or run work entirely in the cloud when that makes sense. They can also scale easily, adding massive amounts of resources when needed and shutting them down when they are no longer required. There is no need to worry about maintenance, power, cooling, or keeping hardware current, and users pay only for what they use.
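"Pay only for what you use" is largely a matter of releasing capacity promptly. Here is a minimal sketch, again assuming the azure-batch Python SDK and the illustrative pool and job IDs from the earlier sketches, of winding a pool down to zero and cleaning up once a simulation campaign finishes.

```python
# Minimal sketch: release cloud capacity once a simulation campaign is done.
# Pool and job IDs are the illustrative names used in the earlier sketches.
import azure.batch.models as batchmodels
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials

client = BatchServiceClient(
    SharedKeyCredentials("mybatchaccount", "<account-key>"),
    batch_url="https://mybatchaccount.westus2.batch.azure.com")

# Mark the job complete so no new tasks are scheduled.
client.job.terminate(job_id="variant-sweep")

# Turn off autoscaling so the pool can be resized manually.
client.pool.disable_auto_scale(pool_id="hpc-batch-pool")

# Shrink the pool to zero dedicated nodes, letting in-flight tasks finish first.
client.pool.resize(
    pool_id="hpc-batch-pool",
    pool_resize_parameter=batchmodels.PoolResizeParameter(
        target_dedicated_nodes=0,
        node_deallocation_option=batchmodels.ComputeNodeDeallocationOption.task_completion))

# Or delete the pool outright when the campaign is over; billing stops with the nodes.
client.pool.delete(pool_id="hpc-batch-pool")
```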
Azure provides the range of capabilities to support the widest variety of compute workloads, but moving HPC workloads to the cloud is not a single destination. Few organizations jump straight to "all cloud." There is a spectrum of possibilities, and the "right" answer varies by customer, and even by workload. Whether you are moving a single application into Azure or adopting a cloud-native approach, Azure has the resources you need.
If you’d like more information or would like to learn more about our Azure HPC services, please drop us a note below.