Where are all these AI chips coming from?

Blink, and the silicon industry changes. I thought we knew what kinds of microprocessor we needed, and the silicon chip makers were just getting on with making them faster, cheaper and better. But now, I’m seeing reports of a whole bunch of so-called “AI chips”. So what’s happening, and why?

The birth of AI chips is a strange thing, because they are not coming from the usual places. Instead, they are emerging from cloud providers like Google and Alibaba, from the telecoms firm Huawei, and from tiny start-ups like GraphCore. IBM is making AI chips, just when it seemed to be scaling down its silicon work. And even Intel, when it got involved, bought a start-up called Nervana and then teamed up with Facebook.

The trend has also arrived just when system developments in the data center and elsewhere seemed to be providing everything AI needed, and conventional hardware was already bringing new capabilities along to meet further demands.

AI and machine learning have always been software disciplines. The first generations of computers automated calculations, and as they grew more powerful, software running on them began to simulate something closer to the way human brains work: IBM’s Deep Blue chess-playing computer, for instance, and more recently the Watson system.

A new type of processor did come along: the “graphics processing unit” (GPU), which performed image processing fast using many parallel arithmetic units and direct access to memory. Gaming made these chips widespread and cheap; then it turned out that they were also great for AI and machine learning applications.

Most cloud-based high-performance computing and AI services are heavy users of GPUs, and NVIDIA has ridden this boom to become a giant. But apparently, GPUs are not enough. Why do we need these new, even more specialised chips?

The basic answer, according to Andy Patrizio at Ars Technica, is that artificial intelligence, and the pattern recognition which forms a large part of it, doesn’t need the same kind of precision mathematics that CPUs (and GPUs) have evolved to perform.

In his own words: “AI for self-driving cars doesn’t use deterministic physics to determine the path of other things in its environment. It’s merely using previous experience to say this other car is here traveling this way, and all other times I observed such a vehicle, it travelled this way.”

AI calculations need only “single precision” arithmetic, not sums carried to the final decimal point: “So while CPUs and GPUs can both do it very well, they are in fact overkill for the task. A single-precision chip can do the work and do it in a much smaller, lower power footprint.”
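To make the precision point concrete, here is a minimal sketch (not from the article; the layer sizes and values are arbitrary, and NumPy is assumed to be available) comparing single-precision and double-precision arithmetic on a toy neural-network-style matrix multiply:

```python
# Illustrative only: why single precision is "good enough" for AI-style maths.
import numpy as np

rng = np.random.default_rng(0)

# A toy fully connected layer: 1,000 inputs, 1,000 outputs (arbitrary sizes).
weights64 = rng.standard_normal((1000, 1000))   # float64 "reference" copy
inputs64 = rng.standard_normal(1000)

weights32 = weights64.astype(np.float32)        # single-precision copy
inputs32 = inputs64.astype(np.float32)

out64 = weights64 @ inputs64
out32 = weights32 @ inputs32

# The single-precision result tracks the double-precision one very closely...
rel_error = np.max(np.abs(out32 - out64)) / np.max(np.abs(out64))
print("relative error of float32 result:", rel_error)

# ...while the weights take half the memory (and half the bandwidth to move).
print("float64 weights:", weights64.nbytes / 1e6, "MB")
print("float32 weights:", weights32.nbytes / 1e6, "MB")
```

On dedicated AI silicon the same trade-off is pushed further still, down to 16-bit and even 8-bit number formats, which is exactly the “much smaller, lower power footprint” Patrizio describes.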

That’s why Google has TPUs, Intel and Facebook have Nervana, and all the other AI chips are appearing. But the picture is not a simple one.

The processors created by the cloud giants will be deployed within their own data centers, and may not be visible to partners and customers (except perhaps through programmatic interfaces). The other chips will find their way into a range of different places, from specialist processors for intelligent vehicles to tiny AI edge processors for devices like VR headsets.

There will still be a huge mass of x86 chips out there. But a wave of change is coming…


Written by Peter Judge (Guest)


Peter Judge is the Global Editor at Datacenter Dynamics. His main interests are networking, security, mobility and cloud. You can follow Peter at: @judgecorp

