The Stack discusses OpenStack’s progress over the past year and its future outlook with OpenStack Foundation board member Simon Anderson, CEO of DreamHost, and DreamHost’s VP of Cloud, Jonathan LaCour.

As we approach the end of the year, what have you seen change or progress for OpenStack over 2014?

Simon: From a board-level and technical perspective, the big thing for me in 2014 was the push around Win the Enterprise. There was a big focus on building relationships with key corporate sponsors, and on developing the code base for both existing and new projects, to harden OpenStack for true enterprise consumption. This covered aspects such as upgradeability between versions. Across prior iterations there has been a lot of evolution in specific projects and in the code base, and ensuring a smooth upgrade path between versions of OpenStack has been a key area of focus.

Another area of progress is usability. We want to give enterprises an increased ability to choose their storage subsystems of choice, and the networking vendors with whom they may have existing relationships. This push has been about advancing the plug-in model of OpenStack more quickly. Usability was definitely a big area in 2014, and I think a lot of progress was made with the most recent release. I see this trend continuing into 2015 with the next release.
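As a minimal sketch of the plug-in idea (the backend names and classes below are hypothetical, and this is not OpenStack’s actual driver-loading mechanism), a deployment configuration names a backend and a common interface hides which vendor’s driver sits behind it:

```python
# Illustrative sketch only: a configuration-driven plug-in registry.
# The backend names and classes are hypothetical, not real OpenStack drivers.

class CephBackend:
    def create_volume(self, size_gb):
        return f"ceph volume of {size_gb} GB"

class VendorSANBackend:
    def create_volume(self, size_gb):
        return f"vendor SAN volume of {size_gb} GB"

# Registry mapping a configured name to a driver class.
BACKENDS = {
    "ceph": CephBackend,
    "vendor_san": VendorSANBackend,
}

def load_backend(config):
    """Pick the storage backend named in the deployment's configuration."""
    return BACKENDS[config["volume_backend"]]()

if __name__ == "__main__":
    backend = load_backend({"volume_backend": "ceph"})
    print(backend.create_volume(10))
```

In OpenStack itself the same general idea is realised through per-project driver and plug-in interfaces, so a deployer can swap storage or networking vendors without changing the rest of the stack.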

I don’t speak for the board as a whole, but the other big focus for me was certainly the redefinition of the core of OpenStack. When the Foundation was formed, there were only really four major projects that were part of OpenStack, including Nova, the compute project, and Swift, the object storage project. At the time, in order to call your project a distribution of OpenStack, or to say as a service provider that you were running an OpenStack cloud, you simply had to reference or be using the code for Nova and Swift. We realised that that wasn’t an effective template, so the DefCore Committee was formed, and it has been patiently working through a new framework for defining OpenStack.

Redefinition may sound administrative, but it’s actually a very important process for the whole community to go through. A renewed definition will enable all participants in the community to really get out there and build products around OpenStack, and to use OpenStack in a way in which everyone feels it is a level playing field. A lot of progress was made in 2014 in this respect, and a lot of community consultation took place around the topic. We are currently at the final step of redefining OpenStack, which is a membership vote at the end of the year to approve the new framework. It has been a very significant effort to make OpenStack even more flexible and a lot more current in terms of all the existing code and projects.

Of course there’s also the ongoing growth of the community and its adoption around the world. The project metrics continue to move in the right direction in terms of contributors and adoption. Over the last cycle, more than 1,400 developers contributed to the latest release, and we see this number increase by 30 to 50% with each release cycle.

Jonathan: I’ve been to every OpenStack Summit since the Diablo release, and this last summit was definitely the most productive. It has been very interesting to see OpenStack go through its continued transition. In the early days it was all about parity, achieving stability, and creating new projects that are competitive with what the market is demanding. Now we are really transitioning into a phase where projects are taking a more collaborative approach to OpenStack, working together to make the entire platform more cohesive. This has been a major focus in the OpenStack community over the last year, and it was quite common to see projects come together around new initiatives.

One of the best examples is the NFV group which has spun up in the OpenStack ecosystem. It’s a collaboration between several different projects, with networking groups and compute groups getting together and working on this emerging use case. It’s been really exciting to see.

A lot of the storage and compute teams are working together more closely. For example, we have seen things like Ceph taking hold, with users gaining a distinct advantage if they’re building on top of these platforms.

The other trend I started to see was a shift back toward open source, which is great for the community. This contrasts with the early days of OpenStack, when it was about vendors coming in with proprietary plug-ins and back ends and capitalising on the explosion around OpenStack.

In 2015, I think these areas will continue to develop. Collaboration between projects will take hold and bear fruit with compelling use cases. We will also see more and more people contributing open source projects into the ecosystem on the periphery.

Where do you see real momentum for OpenStack kicking off? In which areas do you see OpenStack becoming the top choice?

Jonathan: In many ways OpenStack is already winning battles. Private cloud is the use case that OpenStack is particularly dominating. There are a lot of traditional players in the virtualisation space like EMC and VMware coming in and being a part of that movement, which shows a shift toward OpenStack in private data centres.

The public cloud hasn’t really been that dominant a use case for OpenStack, but I think there are good reasons for that. DreamHost is a public cloud provider, and we’ve shouldered quite a bit of our load with HP and Rackspace. We set out to build our public cloud on OpenStack to really fit that use case. I think we’re more or less there now, but there’s obviously still work to do.

I think public cloud still has really big potential. I like the idea of consumer choice and developers having the choice of picking whatever public cloud they want. Maybe they’re using a private cloud in their day jobs and they want to tinker or hack away at a new project – wouldn’t it be great if they could use the same API? The public cloud case has still got a good chance to emerge strong in 2015.
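As a minimal sketch of what "the same API" looks like in practice (the cloud names below are hypothetical entries in a local clouds.yaml file), the openstacksdk client lets the same code target a private cloud at work or a public OpenStack cloud simply by switching the named cloud:

```python
# Minimal sketch: the same client code runs against any OpenStack cloud.
# "private-lab" and "dreamcompute" are hypothetical clouds.yaml entries.
import openstack

def list_servers(cloud_name):
    """Connect to the named cloud and print its servers."""
    conn = openstack.connect(cloud=cloud_name)
    for server in conn.compute.servers():
        print(server.name, server.status)

# The same function works for a private cloud at work...
list_servers("private-lab")
# ...and for a public OpenStack cloud, with no code changes.
list_servers("dreamcompute")
```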

Simon: Transforming an IT ecosystem is very complex and multi-faceted. Before, enterprises and even service providers only really had a few choices in terms of networking, and all of them were proprietary and had their own operating systems. This is one area where we’re seeing rapid change happening.

We’re now utilising Linux servers as networking devices with specialised software, and our systems administrators, who are very familiar with managing Linux servers, can administer them and tie them into our monitoring. It takes time to shift that kind of approach to hardware as well, much like what happened ten years ago when Linux started to really take off as a server operating system for big production environments. I think we’re seeing the same kind of thing now, and OpenStack is playing a role: without it the impetus isn’t there, because you have to rely on the traditional proprietary vendor stack.

OpenStack gives you the opportunity to implement a very large private cloud environment, for example in a bank or an insurance company. You have a choice of a vast range of hardware and software in that environment without proprietary lock-in. It is open, extensible and manageable – and not just in terms of cost, but in terms of the true DevOps vision: being able to tie it into your own monitoring systems, and to add all sorts of custom automation on top depending on your particular use case.

I think it’s a pretty exciting time in the world of IT at this point. With Akanda, our network virtualisation start-up for cloud service providers and the enterprise, which we spun off a couple of months ago, we’re seeing a lot of very interesting discussions with traditional companies that you would not expect to be so aggressive in wanting to implement a fully open stack. They want to set up fully open stacks of hardware and software for very large-scale implementations within their environments. These engagements that we’re having privately with CIOs and CTOs of major corporations can definitely be seen as part of that overall picture and shift.

Regarding stability and reliability over the course of the coming months, what is OpenStack’s progress on these variables and what obstacles are there left to face?

Jonathan: In my view there are always obstacles when it comes to stability and performance. There’s never been a release of any piece of software that didn’t claim somewhere in its release notes that it ‘improves stability and performance.’ It is something that needs to be consistently focused on.

I think as OpenStack use cases go up in scale, these variables are going to continue to be relevant issues for us. I would say there are now fewer and fewer new projects joining the core of OpenStack and becoming required pieces of the full cloud deployment puzzle, and that’s a good thing. Each of the projects that emerged over the last four or five cycles has had a chance to catch up with the stability and hardened nature of some of the older projects. This is a very natural thing, and it’s not unique to open source or to OpenStack.

We’re really at a point now, especially in the Juno and Kilo cycles, where we’ll see a lot more stability and better performance than we’ve seen in the past. It’s going to be a continued issue; we’re always going to want releases to perform better in order to serve more use cases. If you’ve got higher scale and quicker network creation, and you’re able to spin up services and destroy them at will and handle that at scale, you can build on top of your infrastructure a lot more rapidly and do more interesting and innovative things. We’re going to continue to see that trend, and it’s getting better every day – I’m hopeful about this next cycle, especially.

DreamCompute is a good use case. It’s DreamHost’s public cloud – it is built on OpenStack; on Ceph, which provides the storage architecture underneath; and on Akanda, which is our open source network virtualisation platform. We started this project a couple of years ago, adopting very early versions of OpenStack: Diablo and Essex, through Folsom, all the way up to the present day. We are currently in the process of migrating from H to I, Havana to Icehouse, and then rapidly moving from Icehouse to Juno.

We originally paused at Havana because we felt we had achieved the stability we needed to harden all the rest of the glue pieces around DreamCompute and really get it to perform well. We do continual testing on our own cloud, not just for performance metrics but also to monitor the system. We get daily emails, and we have a dashboard showing the percentage of tests that have failed and succeeded, how fast our operations are happening, and how many operations are happening at once. I think the H cycle was really the turning point; Havana was the point at which things were scalable and stable enough that you could build out fairly large-scale things. Icehouse has moved that forward even further, and Juno has as well.
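As a purely illustrative sketch of that kind of continual testing (the checks, timings and report format here are hypothetical, not DreamHost’s actual tooling), a periodic job might run a handful of smoke tests, time each operation, and report the pass rate that feeds a dashboard:

```python
# Illustrative sketch of periodic cloud smoke-testing; check names,
# timings and the report format are hypothetical.
import time

def check_boot_server():
    # A real suite would boot and delete an instance via the cloud API.
    time.sleep(0.1)
    return True

def check_create_volume():
    # A real suite would create and delete a volume via the cloud API.
    time.sleep(0.05)
    return True

CHECKS = {
    "boot_server": check_boot_server,
    "create_volume": check_create_volume,
}

def run_suite():
    results = []
    for name, check in CHECKS.items():
        start = time.time()
        try:
            passed = bool(check())
        except Exception:
            passed = False
        results.append((name, passed, time.time() - start))
    pass_rate = 100.0 * sum(1 for _, ok, _ in results if ok) / len(results)
    for name, ok, elapsed in results:
        print(f"{name}: {'PASS' if ok else 'FAIL'} in {elapsed:.2f}s")
    print(f"pass rate: {pass_rate:.1f}%")

if __name__ == "__main__":
    run_suite()
```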

Moving into Icehouse at the moment, we’re seeing a lot of our performance metrics get better and we’re seeing the scale improve. As we move into the Juno release we’re going to see the same thing. DreamCompute has benefitted greatly from increased stability and performance, and that’s enabled us to open it up publicly, so people can sign up on our website and start taking advantage of a high-performance public cloud built on OpenStack, entirely on open source software.

Analysts have suggested that should big money not flow into OpenStack the whole movement could implode. Do you agree that this is a challenge for OpenStack?

Simon: It’s interesting; I guess any new and disruptive technology always faces that kind of claim at every turn. We can’t predict the future, and it is possible that, if circumstances align, something bad may happen, but the main thing I look at is the data I know and see.

Nebula was one of the founding members of the OpenStack Foundation, with a lot of rock-star programmers who had been part of creating Nova while at NASA. It had been a member of the Foundation for a long time, but as a start-up it was paying half a million dollars a year to be a platinum supporter, and that didn’t make sense, so they decided to step down and take a gold-level position. As the chair of the Gold Member and Platinum Member Committee, I assist companies applying for corporate membership. We have had intense interest in that open platinum spot. We had five leading global companies apply for it – Cisco, Huawei, Intel, EMC, and Ericsson. They all put a lot of effort into making their application before the board, and ultimately Intel was successful in securing that slot.

These decisions are not made in a vacuum – they’re made with a lot of forward thinking about what sort of investment these companies are going to put into OpenStack. Many large global tech companies have said publicly that they see OpenStack as a core part of their future and of their five- to ten-year plans. They’re moving very quickly to ensure that they’re utilising and contributing to OpenStack.

I think the value and benefit of this open platform and ecosystem is strategic to so many corporations that I just don’t see the doomsday scenario happening. After all, the beauty of it is that it is software. I think sometimes people think that we are smashing protons to figure out what additional particles are out there – but we’re not; it’s complex, but it is software. We are trying to remove the legacy and the slowness of product development and management that happens between hardware and software. That’s the big thing that the world is demanding right now.

The model of a single company holding the keys, with its development team evolving a product roadmap to cover all the use cases and all the performance requirements in a world of exploding compute utilisation, mobile and cloud, just doesn’t add up. To have one company as the gatekeeper for the product roadmap, even if it has a thousand developers, doesn’t make sense. It’s just not the right model for serving this massive demand for utility compute and storage.

I think that OpenStack is here to stay, and it will evolve in many different directions, but that’s just part of the model – everyone has a voice in it. With the technical committee there’s obviously a lot of rigour in determining what’s next and how particular projects will evolve, but it is a newer model. Open source initiatives taking a collaborative approach to big software challenges have been around for a while, but I think OpenStack takes it to a new level in terms of the number of participants and how it integrates all of those voices and views and makes them work together.

I don’t have any doubt that if there are some large OpenStack failures, mainly technical failures, there will be thousands of developers, some of the best developers in the world, who will point their collective intelligence at that problem and fix it – that’s the difference. It won’t be because someone is telling them to fix it; it will be because it’s just part of their job. A lot of programmers are very excited to be working on this project because it gives them a certain amount of creative freedom that proprietary organisations just don’t seem to offer.