EMC Forum 2013 – Sydney – Part 3

Disclaimer: I recently attended the EMC Forum 2013 – Sydney. I paid for my own flights and accommodation; however, EMC provided meals and some swag. There is no requirement for me to blog about any of the content presented, and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


Part 3

In this post I’d like to touch briefly on a few more of the sessions I went to and point you in the direction of some further reading. I’m working on some more detailed content for the near future.


ViPR Software Defined Storage: Evolving to Software-Defined Data Centre

Danny Elmarji is NSW SE Manager for EMC ANZ, while David Lloyd is Advisory SE, Application Virtualisation Solutions for EMC ANZ. This was the first session after lunch and hence a likely candidate for well-fed, snoring IT guy. Fortunately, Danny and David are both quite adept at keeping the audience engaged and awake during their presentations. Danny started the session off by using a music listening analogy, demonstrating how his own expectations for music listening had changed over time (moving from LP to Walkman to iPod to Spotify). This was a neat way to lead into a discussion on how our expectations as consumers of IT had changed via evolved technology.

Virtualisation showed us the way to increased agility and reduced operating costs from an infrastructure perspective. The real value came with the ability to provision a server in minutes, not weeks. The evolution has now moved us to cloud. Danny then asks how many people feel that their company has delivered on ITaaS. Not many hands go up, despite research showing that ANZ is big on utility, virtualisation and cloud. I think the difference here is that he asked whether we felt we’d really delivered, rather than just adopted the technology. EMC think that compute has got it right, but the rest of the data centre (network and storage) isn’t really there yet.

So what do we do? We’ve created an abstraction layer, but it needs to be for the entire data centre, not just compute. We take the smarts from these devices and use that to create better automation and orchestration processes, so those resources are used more efficiently (lowering costs and increasing agility). Enter the idea of the software-defined data centre (SDDC). Abstract all of the resources, pool them, and then automate and orchestrate them.

Software-defined storage needs to be simple – easy to manage, easy to provision, centralised, and well-integrated / extensible. We don’t want to invest in new frames / platforms to leverage SDS. It needs to be open as well. Danny then introduces Dave to talk more about ViPR (aka that product with the really cool logo).

ViPR has two functions within it: a control service and a data service. The control service is about centralised, simplified management. The data service is about enabling capabilities and technologies on existing storage platforms.

It needs to be simple – you don’t want to introduce a new capability and then have to invest twice the resources to keep it running. To get ViPR ready for operation, it discovers the “storage ecosystem”, and then you create the “virtual array” – the collection of connectivity attributes. You then create virtual storage pools or classes of storage service. These storage pools are designed to leverage the capabilities of the existing storage assets, not diminish or mask them. Then we can look at how it’s consumed – via a service catalogue. ViPR can leverage your enterprise service catalogue and vice versa.
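To make that flow a bit more concrete, here’s a rough sketch of the discover → virtual array → virtual pool → catalogue sequence David described. To be clear, all the class and function names below are mine, purely for illustration – this is not ViPR’s actual API, just a model of the idea that pools express classes of service backed by the capabilities of the discovered arrays.

```python
# Illustrative model of the ViPR-style provisioning flow: discover arrays,
# group them into a "virtual array", define virtual pools as classes of
# service, then consume via a catalogue order. Names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class PhysicalArray:
    name: str
    capabilities: set  # e.g. {"block", "fast-vp", "snapshots"}


@dataclass
class VirtualPool:
    name: str
    required_capabilities: set


@dataclass
class VirtualArray:
    name: str
    arrays: list = field(default_factory=list)
    pools: list = field(default_factory=list)

    def backing_arrays_for(self, pool: VirtualPool):
        # Pools leverage, rather than mask, the underlying array capabilities.
        return [a for a in self.arrays
                if pool.required_capabilities <= a.capabilities]


def order_from_catalogue(varray: VirtualArray, pool_name: str, size_gb: int):
    """Service-catalogue entry point: request capacity by class of service."""
    pool = next(p for p in varray.pools if p.name == pool_name)
    candidates = varray.backing_arrays_for(pool)
    if not candidates:
        raise RuntimeError("no array satisfies this class of service")
    return {"pool": pool.name, "array": candidates[0].name, "size_gb": size_gb}


# "Discover" the storage ecosystem, then build the virtual array and pools.
varray = VirtualArray("prod-dc1",
                      arrays=[PhysicalArray("vnx-01", {"block", "snapshots"}),
                              PhysicalArray("vmax-01", {"block", "fast-vp"})])
varray.pools = [VirtualPool("gold", {"block", "fast-vp"}),
                VirtualPool("silver", {"block", "snapshots"})]

print(order_from_catalogue(varray, "gold", 500))
```

The point of the model is the last line: the consumer asks for “gold” storage, and never sees which physical frame satisfies the request.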

David then runs through a demonstration on how the consumption of ViPR storage looks to the business user. The point here is that automation and orchestration can reduce the risk of something blowing up because of miscommunication or misunderstandings between teams. EMC’s storage best practices are also built into these controls.

It’s not just another automation and provisioning tool though. You look at everything in a centralised fashion from the perspective of logical capability, rather than discrete units of capacity and performance.

It’s also open, with the goal being that the extensibility will provide the opportunity to incorporate third-party arrays as well. Think of it in the same way as you might a driver system for third-party peripherals on Windows. Out of the box it will support all EMC hardware, including XtremIO, as well as NetApp vFilers, for example. It’s also not necessarily about changing everything to run the EMC way – you probably already have your own service catalogue and enterprise portal. EMC claim that there are integration points with these (VMware vCAC, for example, could use ViPR to deliver storage services). Out of the box there’ll be support for VMware, Microsoft and OpenStack.

David provides an example to Danny of the lifecycle of a video file, and how ViPR, by being in the data path, allows you to switch the context of the file, say from file to object, depending on the requirement you have at the time, rather than having to copy and convert the file each time. You can also do this through a number of standards – e.g. S3, Atmos, Centera – so you’re not locked in.
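The interesting part of that demo is that the “switch” isn’t a copy. Here’s a toy sketch of the idea – one copy of the bytes with both a file-style path and an object-style (bucket, key) pointing at it. Again, this is my own illustrative model, not ViPR’s data services API.

```python
# Illustrative only: a single copy of the data, exposed through both a
# file-style namespace and an object-style namespace, the way the demo
# presented file-to-object context switching without copy/convert steps.

class DualHeadStore:
    def __init__(self):
        self._blobs = {}      # blob_id -> bytes (the single copy of the data)
        self._file_ns = {}    # "/path/to/file" -> blob_id
        self._object_ns = {}  # ("bucket", "key") -> blob_id

    def ingest_file(self, path: str, data: bytes) -> None:
        blob_id = len(self._blobs)
        self._blobs[blob_id] = data
        self._file_ns[path] = blob_id

    def expose_as_object(self, path: str, bucket: str, key: str) -> None:
        # No copy or conversion: the object namespace references the same blob.
        self._object_ns[(bucket, key)] = self._file_ns[path]

    def read_file(self, path: str) -> bytes:
        return self._blobs[self._file_ns[path]]

    def get_object(self, bucket: str, key: str) -> bytes:
        return self._blobs[self._object_ns[(bucket, key)]]


store = DualHeadStore()
store.ingest_file("/media/promo.mp4", b"\x00\x01video-bytes")
store.expose_as_object("/media/promo.mp4", "videos", "promo.mp4")
# Same bytes, two access contexts, zero copies:
assert store.read_file("/media/promo.mp4") == store.get_object("videos", "promo.mp4")
```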

Danny then steps in to summarise ViPR from a business point of view. Abstract resources in the storage ecosystem, reduce management costs, pool them and orchestrate them in order to create increased flexibility within the business. The mantra is keep it simple, extensible and open.

Next Generation Unified Storage: VNX Re-defines Midrange Price and Performance

Martin Milthorpe is Director for Mid-Tier Storage, EMC ANZ and clearly a switched-on fellow. While The Register was talking about a potential “VNX2” launch next month, and John Roese let slip during the keynote about a mid-range refresh coming in the next few weeks, Martin was left with the somewhat dubious distinction of having to present about a platform that pretty much everyone knows about and no one was particularly interested in. I had no problem with Martin’s skills as a presenter, nor do I have a problem with what he presented. I do, however, have a problem with the naming of the session – which can more likely be attributed to over-enthusiastic marketing folks than to Martin. There were one or two slides devoted to MCx and that was it for “next-generation”. I know that EMC have had issues with timing, and it would be weird to do a Forum without really talking about the VNX. I also know there were quite a few people in the audience who were unfamiliar with the VNX, so this was a good presentation for them. I sometimes forget that this was Forum, and not a technical deep-dive or roadmap session with my local SE. So there you go. Not quite the session I expected. Should have gone to the second ViPR session.



Unfortunately I then had to head to the airport to catch my flight back to Brisbane and wasn’t able to stay for the last sessions or the cocktail party. Nonetheless, I was very happy with what I’d picked up during the day, and immensely enjoyed catching up with some old friends from EMC and other companies.



As an attendee at the event I was given a shopping bag, an iQ3 water bottle, some pens, Tecala flip-flops, and a Logicalis bottle-opener and squishy dice. Here’s a picture.

EMC Forum 2013 swag

I also entered the Twitter competition and picked up two $20 pre-paid Visa cards. Wheee!



EMC Forum 2013 – Sydney – Part 2



Part 2

In this post I’d like to touch briefly on a few of the sessions I went to and point you in the direction of some further reading. I’m working on some more detailed content for the near future.

Backup and Recovery – a Peek under the Hood of a Service Provider: Telstra

Kevin Kennedy is Director of Cloud Engineering at Telstra, and spent some time setting the scene as to why cloud is a good option and how cloud could be used as both a DR and backup / recovery solution. He suggested that we, in the IT industry, should be asking ourselves whether we’re more about the technology or the information. It’s a problem that exists in a number of IT departments, and one that needs to be revisited regularly to ensure that IT is really adding value to the business. As this session was a little more business-focused than technical, a fair bit of time was spent on ideas like agility and why cloud is good. Nonetheless, brief mention was made of VMware SRM and EMC RecoverPoint. It was nice to see that, at least on the surface, Telstra aren’t doing anything too earth-shatteringly different to most IaaS / BaaS providers.


How Cisco IT Delivers Next Generation Cloud Services

Mike Paranihi is Director of IT and Service Owner of DC Facilities and DCaaS at Cisco. Mike’s was one of the more entertaining presentations of the day (possibly because I’m nuts deep in cloud transformation projects at the moment), and I suggest if you have a chance to listen to him or meet with him that you take the opportunity. Mike is responsible for 34 facilities across the globe. By the numbers, Cisco IT supports over 300 locations in over 165 countries with over 65000 employees. 80% of their compute is virtualised and they’re aiming to get that to 95%. They use VMAX and VNX for T1 and T3 storage, Atmos for cloud storage, and Vblocks; overall, they run 96 EMC arrays with 21PB of raw storage (13PB allocatable).

Challenges? They’re always getting asked to cut the budget, do more with less and continue to maintain or improve service levels. They also need to be able to service internal clients who have evolving and rapidly changing resource requirements. So they created CITEIS (Cisco IT Elastic Infrastructure Service) about 4 years ago. Mike then went on to clarify the taxonomy he was using (DCaaS – facilities; IaaS – compute / storage / network; PaaS – the platform sitting on top; and SaaS – software, e.g. salesforce.com). CITEIS is predominantly an IaaS and PaaS play for Cisco IT.

You need unified fabric, which enables unified compute (they use VMware internally) – this is Generation 1. They implemented process automation and self-service opportunities, and are starting to investigate how they can cloud-burst and leverage capacity from other cloud providers. The portal is key to the success of this with the end-user (without the end-user there is no point – something we forget from time to time). Process automation (in CITEIS v5) now works at the hypervisor layer with both VMware and OpenStack.

The right security models are key to good multi-tenant deployments. Cisco heavily leverages Nexus 7k VRFs, trusted zones and large subnets for this, and provides granular L2 security with VSGs. They primarily use EMC NAS for CITEIS storage.

Interestingly, they offer internal clients an “Express” option that comprises 2 VMs for 90 days for use in PoCs and so forth. They can then choose to move those VMs into a larger vDC (virtual data centre) if required (minimum of 3 months consumption, guaranteed resources). vDCs are elastic and built via catalogue. Managed and self-managed options are also available. The big benefit of this approach for Cisco has been a reduction in provisioning time from 6 – 8 weeks down to about 3 – 5 days. They would like to get this down to 15 minutes but, ironically, are still working through issues with network management controls and some security procedures.

If you want to do PaaS well, you really need to understand exactly what services you want to offer via the service catalogue. You also need to understand which ones are high-frequency, and what the foundational elements of these offerings are, in order to properly provision the underlying infrastructure. Cisco created IT organisations that revolved around service delivery, not function delivery. They’ve been benchmarking the TCO of their PaaS offering internally and are seeing real cost savings. They’re working towards having everything available as a service.

They’re also investing heavily in programmable infrastructure, and are now looking at how to leverage private / public / hybrid cloud while considering multi-tenancy and security requirements. The next step is application-driven, where applications give instructions, based on SLAs and business rules, to the PaaS and IaaS layers to provision suitable bandwidth / compute / IOPS (e.g. an app getting smashed at end of quarter).


In Part 3 I’ll be looking at the ViPR and VNX sessions I attended.


EMC Forum 2013 – Sydney – Part 1



Rather than give you an edited transcript of the sessions I attended, I thought it would be easier if I pointed out some of the highlights. I hope to do some more detailed posts in the near future. This is the first time I’ve been to an EMC Forum since I started my blog, so it was a bit different to listen to what was going on in terms of things that would be blog-worthy. If it comes across as a bit of propaganda from EMC, well, it was their show. There was some really good information presented on the day and I don’t think I could do it justice in one post. And the ViPR logo is still one of the coolest logos for tech stuff that I’ve seen in a while. Note also that there was a metric shit-tonne of buzzwords used in this keynote, so bear with me. That’s an observation, not a criticism. I know the keynote has to appeal to both techs and suits (and those in between). Still, it was a lot.

Part 1 


Shaun McLagan – General Manager, RSA ANZ – was the MC for the keynote. He started by talking about the fact that the EMC Forum has been running in ANZ for the last 10 years. EMC shares have risen 281% in that time. There’s also been lots of change in that time. We should think about the power of change and what that means for us.

Alister Dias – VP and MD, EMC ANZ – started off by talking about four key trends: mobile; cloud; big data; and social. EMC is apparently laser-focused on cloud and big data. These four trends underpin the shift to what they’re calling the “third platform” – the first platform being the mainframe, the second client-server. The third platform is everywhere, and is driven by mobile: a new order of magnitude, with billions of users and hundreds of millions of applications. Pivotal was launched to take advantage of the third platform. EMC is growing; this is their 15th consecutive quarter of growth. Customers are focused on driving efficiency and reducing operational costs. IT is still too heavily oriented towards maintenance, not innovation.

Not all workloads are equal. This variety is driving EMC’s infrastructure innovation. Customers are starting to use private clouds, virtual private hosted and public cloud solutions. ANZ are early adopters, and thus more aggressive in shifting to these models. EMC is trying to leverage the software-defined (?) network and storage stack in the same way VMware did with server virtualisation / compute. ViPR is EMC’s software play that provides the ability to abstract, pool and automate storage – not just EMC storage, anyone’s storage, even commodity storage – with a universal remote control (!) providing the ability to manage this storage from one point.

In the year 2000 we created 2000PB of information during the entire year. Today, we create 4000PB of information in a single day. This is apparently where big data tools help us harness the information to provide better service and so forth. Rapid deployment of apps, accessed anywhere and at any time. New apps will need to leverage not only structured information, but also unstructured data. Telemetric sensors are also going to be useful sources of data. Cisco has been counting internet connections: at the start of the year it was around 8 billion; by the middle of July the number had risen to 10 billion. The internet of things feeding information back to the machine overlords (I may have made that bit up). Apparently, by the year 2020, 40% of all information generated will be from these sensors.
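Taking the keynote’s figures at face value, it’s worth doing the back-of-the-envelope arithmetic on how big that jump actually is:

```python
# Back-of-the-envelope check on the keynote's figures: 2000PB created
# across the whole of the year 2000 versus 4000PB created per day today.
pb_per_day_2000 = 2000 / 365   # roughly 5.5 PB/day back in 2000
pb_per_day_now = 4000.0        # per the keynote

multiple = pb_per_day_now / pb_per_day_2000
print(f"daily creation rate is ~{multiple:.0f}x the year-2000 rate")
# → daily creation rate is ~730x the year-2000 rate
```

So a 730-fold increase in the daily rate in about thirteen years, if the numbers are to be believed.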

Pivotal is fundamentally designed to provide customers with the tools to take advantage of the third platform. GE bought 10% of the company, because they’re into that sort of thing. movideo apparently uses EMC’s big data platforms to analyse metrics of viewer details in real-time.

Cloud and big data are only going to work if EMC provides a “blanket of trust” to organisations, via RSA of course. With integrated backup and recovery and global high availability. Now is the time to separate the hype from reality *cough*.

Shaun then takes the stage to introduce John Roese – Senior VP and CTO, EMC. John starts off with a bit of his career history and why he’s a fan of the ANZ market (early adopters who exploit technology to its full extent). When EMC approached him, he said he wasn’t a storage guy. That’s okay, EMC’s in the information management business, the big data business, the virtualised infrastructure business, the security business, and so on.

What’s the purpose of infrastructure? To empower workloads (the applications, the data, the process). The problem with workloads is that they change and evolve. Second platform application growth will be about 70% between 2012 and 2016. A new class – social, mobile, collaborative – is emerging. In the same timeframe we’ll be looking at 700% growth in third platform workloads. Each workload also has a different expectation of infrastructure. If we build purpose-built infrastructure for every application, things can become complex. Third platform doesn’t replace second platform, although some workloads may move to third platform environments. Second platform IT builds infrastructure to scale for the number of employees accessing it. Third platform builds to scale to the number of customers. If we accept that the third platform is new, then we need to accept that appropriate infrastructure for this platform will be new as well. Converged and software-defined whatever will provide us with new tools to meet these requirements. We need to think differently about how we do things. We can’t back up an exabyte of data across a network. Instead, we need to look at what we do with that data in the first place, and how we create it in a resilient fashion. John then spent a little time using healthcare as an example of how the use of data and infrastructure are changing.

There are three things we can do to keep up with changing IT. Firstly, stop thinking of infrastructure as a collection of boxes; think of infrastructure in the future as something that can deliver pools of capacity and pools of performance-optimised infrastructure. Every workload needs one or the other or both. Secondly, because infrastructure is complex, we need to be working on ways to abstract and simplify it via software (like VMware did). Thirdly, IT can use the information it has available to provide value to the business, thus adding value to IT itself.

Today, when we process data, we move it to the application and the compute and let it do the processing. But wouldn’t it be simpler, when you have a PB or EB of data, to move the application to the data to do the processing? Apparently there wasn’t really a performance tier until recently, as the compute was the bottleneck and the application demand wasn’t there. The performance tier is generally low-latency, has high compute / storage affinity or locality, scales to TB rather than PB (it costs a lot), and is biased towards flash. In the future, you’ll have capacity and performance tiers in on-premise and off-premise configurations. The complexity comes from how well this all works together. Chris Evans did a nice article on EMC’s potential misunderstanding of what’s really going on over at his blog, while Martin Glassborow also did a good piece on his blog from a slightly different angle.

EMC have $4 billion a year to spend on R&D and M&A. They say they will lead in performance-tier innovation: they were first to put flash in an array, and nowadays you can buy a full-flash VNX. They have XtremSF and XtremIO, and they also bought ScaleIO a few weeks ago. As far as EMC are concerned, they are number one in the capacity tier; the key is going to be staying number one. John also mentioned a major refresh to the VNX is coming in a few weeks.

With all of this capability comes a large amount of complexity. This is where something like ViPR comes in – the software-defined storage solution. Storage becomes less about plumbing, and more about delivering a very specific experience for each of your application workloads. The industry got storage virtualisation wrong the first couple of times by putting a heavyweight software layer in between perfectly good storage and perfectly good applications. The virtualisation of provisioning, organisation and interaction needed to happen, rather than the virtualisation of the storage itself. The ViPR controller does all of the array resource management. ViPR also offers a data service that will (eventually) provide file / block / object / etc translation services as required.

John then goes on to talk about Pivotal for a little while (the third bit of advice on how to keep pace with changing IT requirements). It can be boiled down to four things:

  • Cloud abstraction layer;
  • Big data;
  • Fast data (real-time ingestion); and
  • Analytics.

John tells us that EMC believes in choice, which is why they’ve kept EMC, VMware and Pivotal separate (they can then sell us any of them or all of them). John then wraps things up and we move on to the EMC Galaxy Awards, presented by Shaun, John, Alister and Paul Harapin, VP APJ at VCE. There’s more about these on the ECN website.

Finally, the last part of the keynote is done by Michael McQueen – author, researcher and speaker – looking at what makes Generation Y tick. This was a fascinating and entertaining presentation, and I’d encourage you to check out his website and books.

And that was it for the keynote, a mere 2 hours later. I’ll be doing a post soon on some of the breakout sessions I attended.

*Update 16.08.2013 11:45am* I meant to link to Simon’s articles on The Register discussing John Roese’s position on capacity vs performance tiers – they are here and here.