Arm has set out plans to cut its global datacentre footprint by 45% and reduce its use of on-premise compute resources by 80% by offloading some of its core compute tasks to the Amazon Web Services (AWS) cloud.


The British semiconductor designer is in the process of migrating the majority of its electronic design automation (EDA) workflows to the Amazon public cloud platform, and claims the progress it has made on this front so far has led to a 6x improvement in performance for those workloads.
EDA is an important part of the semiconductor development process, and involves using software tools to design and analyse computer chips. The workflows it generates include elements of front-end design, simulation, verification and data analysis.
“These highly iterative workflows traditionally take many months or even years to produce a new device, such as a system-on-a-chip, and involve massive compute power,” said Arm and AWS in a statement announcing their technology tie-up.
It is intricate work, as each chip is designed to deliver maximum performance in as small a space as possible, and can contain billions of transistors that need to be engineered down to a single-digit nanometre level.
Traditionally, Arm has run these computationally intensive workloads from on-premise datacentres, but it is now reworking its processes so that more of this type of work can be done in the AWS cloud.
“Semiconductor companies that run these workloads on-premise must constantly balance costs, schedules, and datacentre resources to advance multiple projects at the same time. As a result, they can face shortages of compute power that slow progress or bear the expense of maintaining idle compute capacity,” the statement continued.
As well as its EDA workloads, the company is also using the AWS cloud to collect, integrate and analyse the telemetry data it accrues to inform its design processes. It claims this will improve the performance of its engineering teams and the organisation’s overall efficiency.
Specifically, Arm will be hosting these workloads on a variety of Amazon Elastic Compute Cloud (EC2) instance types, and will use the machine learning-based AWS Compute Optimiser service to decide which instances should run where.
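To illustrate how that kind of instance selection can be driven programmatically, the sketch below queries Compute Optimiser’s right-sizing recommendations through the AWS SDK for Python (boto3). This is a hypothetical example rather than Arm’s actual tooling: the region, account ID and instance ARN are placeholders, and which response fields matter will depend on a team’s own workloads.

    # Minimal sketch: asking AWS Compute Optimizer for EC2 right-sizing
    # recommendations via boto3. The ARN below is a placeholder, not
    # anything from Arm's actual deployment.
    import boto3

    client = boto3.client("compute-optimizer", region_name="us-east-1")

    response = client.get_ec2_instance_recommendations(
        instanceArns=[
            "arn:aws:ec2:us-east-1:123456789012:instance/i-0abcd1234efgh5678"
        ]
    )

    for rec in response["instanceRecommendations"]:
        print("Instance:", rec["instanceArn"])
        print("Current type:", rec["currentInstanceType"])
        # Finding is e.g. OVER_PROVISIONED, UNDER_PROVISIONED or OPTIMIZED
        print("Finding:", rec["finding"])
        # Candidate instance types come back ranked; rank 1 is the best fit
        for option in rec["recommendationOptions"]:
            print("  Candidate:", option["instanceType"], "rank", option["rank"])

Because the recommendation options are ranked, an engineer or a scheduling script can compare the current instance type against the top-ranked candidates before deciding where a given workload should run.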
Arm is also drawing on the expertise of AWS partner Databricks to develop and run machine learning applications in Amazon EC2, which will enable it to process data gleaned from its engineering processes to improve the efficiency of its workflows too.
“Through our collaboration with AWS, we’ve focused on improving efficiencies and maximising throughput to give precious time back to our engineers to focus on innovation,” said Rene Haas, president of IP Products Group (IPG) at Arm.


“We’re optimising engineering workflows, reducing costs, and accelerating project timelines to deliver powerful results to our customers more quickly and cost effectively than ever before.”
Peter DeSantis, senior vice-president of global infrastructure and customer support at AWS, added: “AWS provides truly elastic high performance computing, unmatched network performance, and scalable storage that is required for the next generation of EDA workloads, and this is why we are so excited to collaborate with Arm to power their demanding EDA workloads running on our high performance Arm-based Graviton2 processors.”