Where are all these AI chips coming from?

Blink, and the silicon industry changes. I thought we knew what kinds of microprocessor we needed, and the silicon chip makers were just getting on with making them faster, cheaper and better. But now, I’m seeing reports of a whole bunch of so-called “AI chips”. So what’s happening, and why?

The birth of AI chips is a strange thing, because they are not coming from the usual chipmakers. Instead, they are coming from cloud providers like Google and Alibaba, the telecoms firm Huawei, and tiny start-ups like Graphcore. IBM is making AI chips, after it had seemed to be scaling down its silicon work. And even Intel, when it got involved, bought a start-up called Nervana and then teamed up with Facebook.

It has also come just when system developments in the data center and elsewhere seemed to be providing everything AI needed, and conventional hardware was already evolving to meet further demands.

AI and machine learning have always been a software discipline. The first generations of computers automated calculations, and as they grew more powerful, software running on them began to simulate something closer to the way human brains work: IBM’s Deep Blue chess-playing computer, for instance, and more recently the Watson system.

A new type of processor did come along: the “graphics processing unit” (GPU), which performed image processing fast through massively parallel arithmetic and direct access to memory. Gaming made these chips widespread and cheap; then it turned out that they were great for AI and machine learning applications.
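The reason is that the heavy lifting in machine learning is large matrix arithmetic, exactly the parallel workload GPUs were built for. The sketch below is a rough illustration only, assuming PyTorch and an optional CUDA GPU rather than any vendor-specific toolkit: it times the same matrix multiply on the CPU and, if one is present, on a GPU.

```python
# Rough illustration: the same matrix multiply on the CPU and (if available) a GPU.
# Assumes PyTorch is installed; timings will vary widely from machine to machine.
import time
import torch

a = torch.rand(4096, 4096)
b = torch.rand(4096, 4096)

start = time.time()
_ = a @ b                                 # matrix multiply on the CPU
print(f"CPU: {time.time() - start:.3f}s")

if torch.cuda.is_available():             # only runs if a CUDA GPU is present
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()              # make sure the copy to the GPU has finished
    start = time.time()
    _ = a_gpu @ b_gpu                     # the same multiply, spread across GPU cores
    torch.cuda.synchronize()              # wait for the GPU kernel to complete
    print(f"GPU: {time.time() - start:.3f}s")
```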

Most cloud-based high-performance computing and AI services are heavy users of GPUs, and NVIDIA has ridden this boom to become a giant. But apparently, GPUs are not enough. Why do we need these new, even more specialised chips?

The basic answer, according to Andy Patrizio at Ars Technica, is that artificial intelligence, and the pattern recognition that forms a large part of it, doesn’t need the same kind of precision mathematics that CPUs (and GPUs) have evolved to perform.

In his own words: “AI for self-driving cars doesn’t use deterministic physics to determine the path of other things in its environment. It’s merely using previous experience to say this other car is here traveling this way, and all other times I observed such a vehicle, it travelled this way.”

AI calculations only need “single precision” arithmetic, not sums carried to the last decimal place: “So while CPUs and GPUs can both do it very well, they are in fact overkill for the task. A single-precision chip can do the work and do it in a much smaller, lower power footprint.”
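A rough way to see the point (using plain NumPy on a CPU, not any particular AI chip) is to run the same neural-network-style matrix product at three floating-point precisions: the lower-precision answers are close enough for pattern recognition, while the memory footprint, and with it the silicon and power a chip would need, halves each time.

```python
# Rough illustration: one matrix product computed at three floating-point precisions.
# Assumes only NumPy; real AI chips bake this trade-off directly into the hardware.
import numpy as np

rng = np.random.default_rng(0)
x64 = rng.random((500, 500))              # double precision, the CPU default
x32 = x64.astype(np.float32)              # single precision
x16 = x64.astype(np.float16)              # half precision, common on AI accelerators

for name, x in [("float64", x64), ("float32", x32), ("float16", x16)]:
    y = x @ x.T                           # a typical neural-network-style operation
    print(f"{name}: {x.nbytes / 1e6:.1f} MB, sample result {float(y[0, 0]):.2f}")
```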

That’s why Google has TPUs, Intel and Facebook have Nervana, and all the other AI chips are appearing. But this is not simple.

The processors created by the cloud giants will be deployed within their own data centers, and may not be visible to partners and customers (except perhaps through programmatic interfaces). The other chips will find their way into a range of different places, from specialist processors for intelligent vehicles to tiny AI edge processors for devices like VR headsets.

There will still be a huge mass of x86 chips out there. But a wave of change is coming…


Written by Peter Judge (Guest)


Peter Judge is the Global Editor at Datacenter Dynamics. His main interests are networking, security, mobility and cloud. You can follow Peter at: @judgecorp

