Wolf in Open Source clothing


In an interesting Medium article, Andrew Leonard wrote about how Amazon may be starting to compete with some of its Open Source software partners. Andrew's article delved into the specifics of the case involving Elastic and its Elasticsearch open source software. Elastic has been happy to offer Elasticsearch in its Open Source form on the AWS platform, and many customers were happy to consume Elastic's capabilities that way.

Now comes the rub. Elastic, like any good commercially driven organisation working with Open Source, adjusted its model so that it began to charge for the premium components in its offering. There is nothing inherently wrong with this practice, and it can even be seen as positive for the Open Source movement: a thriving ecosystem of commercial offerings built on Open Source technology makes it easier for enterprise customers to adopt the technology and increases innovation. It is also a sensible business approach; many early Open Source commercial models relied on providing software support alone, which proved a hard way to grind out a business. If a company is providing genuine innovation, and it hasn't taken Open Source code under, say, the GPL, then it is not unreasonable for it to charge for that innovation.

On the opposite side of the argument, Open Source adherents would see charging for such technology as anathema, against the principles of the Open Source movement. Interestingly, or conveniently, depending on your point of view, this seems to be the position Amazon took. In what might be presented as Open Source evangelism, Amazon decided it didn't like Elastic's approach. Its response was to develop replacement versions of Elastic's premium products and rent them out on its own platform - effectively killing Elastic's cloud business.

AI and technology startups often run the risk of building up too much reliance on the software, services and APIs of the hyperscalers, without retaining the ability to run their software or their business independently of them. The danger is not only that they might get locked into a specific platform, but also that they fail to develop their own genuine IP, as developers take the lazy route with pre-packaged services.

Often a safer and lower-cost alternative to the hyperscale clouds is the diverse market of focused cloud service providers. Many are committed to Open Source and, by the nature of their businesses, offer open platforms built on technologies such as OpenStack. This allows software companies to retain control and portability of their code, with the ability to deliver it both as a cloud offering and as an on-premises capability, without being locked into specific clouds and APIs. It also means your developers will need to build real IP. This last point is vital: you are not a real AI company, and cannot justify AI valuations, if you are simply consuming someone else's AI via an API. The issue is magnified with the advent of serverless computing, where the potential for API lock-in is more significant unless proper standardisation comes about. At present Amazon's Lambda has some integration with OpenAPI 3.0, and Google is championing Knative, which codifies best practices shared by successful real-world Kubernetes-based frameworks across public clouds and private/third-party data centers.
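To make the portability point concrete, here is a minimal sketch (in Python) of the usual way teams keep their code independent of any one provider: business logic depends on a small, provider-neutral interface rather than a vendor SDK. The names `ObjectStore`, `LocalStore` and `archive_report` are hypothetical illustrations for this post, not any cloud vendor's actual API.

```python
# Sketch: application code depends on an abstract contract, so a cloud-backed
# implementation can be swapped in without touching the business logic.
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Provider-neutral storage contract the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class LocalStore(ObjectStore):
    """In-memory stand-in; a cloud-backed subclass would implement the
    same two methods using a given provider's SDK."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


def archive_report(store: ObjectStore, name: str, body: bytes) -> None:
    # The application never imports a vendor SDK directly, so changing
    # providers means writing one new ObjectStore subclass, not a rewrite.
    store.put(f"reports/{name}", body)


store = LocalStore()
archive_report(store, "q1.txt", b"revenue up")
print(store.get("reports/q1.txt"))  # prints b'revenue up'
```

The same pattern applies to queues, functions-as-a-service and model-serving endpoints: the thinner the seam between your IP and the platform, the easier it is to run on an OpenStack-based provider, on premises, or on a hyperscaler, as the business dictates.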

At Verne Global, we think of ourselves as offering equal and fair commercial opportunity. Yes, we run a commercial business and this grows through offering excellent service to satisfied customers, but we also want and need our customers to succeed.

Written by Vasilis Kapsalis


Vas is Verne Global's Director of Deep Learning and HPC Solutions. He comes with a wealth of experience from the global technology sector, with detailed knowledge in Deep Learning, Big Data and HPC, as well as consultancy skills in IoT and digital transformation.

