Kingston’s NVMe Line-up Is The Life Of The Party

Disclaimer: I recently attended VMworld 2017 – US.  My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

You can view the video of Kingston‘s presentation at Tech Field Day Extra VMworld US 2017 here, and download a PDF copy of my rough notes from here.

 

It’s A Protocol, Not Media

NVMe has been around for a few years now, and some people mistake it for a new kind of media to plug into their servers. It isn't; it's a standard specification for accessing Flash media via the PCI Express bus. There are a bunch of reasons why you might choose NVMe over SAS, including lower latency and less CPU overhead. My favourite thing about it, though, is the plethora of form factors available. Kingston touched on these in their presentation at Tech Field Day Extra recently. You can get NVMe devices as half-height, half-length (HHHL) add-in cards (AIC), and in U.2 (2.5″) and M.2 sizes (I've included a quick way to spot them on a Linux host after the list). To give you an idea of the use cases for each of these, Kingston suggested the following applications:

  • HHHL (AIC) card
    • Server / DC applications
    • High-end workstations
  • U.2 (2.5″)
    • Direct-attached, server backplane, just a bunch of flash (JBOF)
    • White box and OEM-branded
  • M.2
    • Client applications
    • Notebooks, desktops, workstations
    • Specialised systems
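
If you're wondering which of these form factors has landed in a particular host, the operating system can tell you. Here's a minimal sketch (Python, Linux only) that walks sysfs to list NVMe controllers; the paths are standard for the in-kernel nvme driver, though which attributes are populated varies by kernel version.

```python
#!/usr/bin/env python3
"""List NVMe controllers on a Linux host via sysfs (illustrative sketch)."""
from pathlib import Path

def read_attr(ctrl: Path, name: str) -> str:
    attr = ctrl / name
    return attr.read_text().strip() if attr.exists() else "n/a"

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    # "transport" reads "pcie" for local AIC / U.2 / M.2 devices.
    print(f"{ctrl.name}: model={read_attr(ctrl, 'model')} "
          f"transport={read_attr(ctrl, 'transport')}")
```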

 

It’s Pretty Fast

NVMe has proven to be pretty fast, and a number of companies are starting to develop products that leverage the protocol in an extremely efficient manner. Couple that with the rise of NVMe over Fabrics (NVMe-oF) solutions and you've got some pretty cool stuff coming to market. The price is also becoming a lot more reasonable, with Kingston telling us that their DCP1000 NVMe HHHL card comes in at around "$0.85 – $0.90 per GB at the moment". It's obviously not as cheap as things that spin at 7200RPM, but the speed is mighty fine. Kingston also noted that the 2.5″ form factor would be hanging around for some time yet, as customers appreciate the serviceability of the form factor.
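
To put that per-gigabyte figure in context, here's some back-of-the-envelope arithmetic. The capacities and the nearline disk price are my own illustrative assumptions, not Kingston's numbers:

```python
# Rough $/GB comparison (illustrative assumptions, not vendor pricing).
NVME_PER_GB = (0.85, 0.90)   # Kingston's quoted range for the DCP1000
HDD_PER_GB = 0.03            # assumed 7200RPM nearline figure, for contrast

for capacity_gb in (800, 1600, 3200):   # hypothetical capacities
    low = capacity_gb * NVME_PER_GB[0]
    high = capacity_gb * NVME_PER_GB[1]
    print(f"{capacity_gb}GB: NVMe ${low:,.0f}-${high:,.0f} "
          f"vs ~${capacity_gb * HDD_PER_GB:,.0f} at 7200RPM")
```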

 

[Kingston DCP1000 – Image courtesy of Kingston]

 

This Stuff’s Everywhere

Flash media has been slowly but surely taking over the world for a little while now. The cost per GB is reducing (slowly, but surely), and the range of form factors means there’s something for everyone’s needs. Protocol advancements such as NVMe make things even easier, particularly at the high end of town. It’s also been interesting to see these “high end” solutions trickle down to affordable form factors such as PCIe add-in cards. With the relative ubiquity of operating system driver support, NVMe has become super accessible. The interesting thing to watch now is how we effectively leverage these advancements in protocol technologies. Will we use them to make interesting advances in platforms and data access? Or will we keep using the same software architectures we fell in love with 15 years ago (albeit with dramatically improved performance specifications)?

 

Conclusion and Further Reading

I’ll admit it took me a little while to come up with something to write about after the Kingston presentation. Not because I don’t like them or didn’t find their content interesting. Rather, I felt like I was heading down the path of delivering another corporate backgrounder coupled with speeds and feeds and I know they have better qualified people to deliver that messaging to you (if that’s what you’re into). Kingston do a whole range of memory-related products across a variety of focus areas. That’s all well and good but you probably already knew that. Instead, I thought I could focus a little on the magic behind the magic. The Flash era of storage has been absolutely fascinating to witness, and I think it’s only going to get more interesting over the next few years. If you’re into this kind of thing but need a more comprehensive primer on NVMe, I recommend you check out J Metz’s article on the Cisco blog. It’s a cracking yarn and enlightening to boot. Data Centre Journal also provide a thorough overview here.

Druva Is Useful, And Modern

Disclaimer: I recently attended VMworld 2017 – US.  My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

You can view the video of Druva‘s presentation here, and you can download a PDF copy of my rough notes from here.

 

DMaaS

Druva have been around for a while, and I recently had the opportunity to hear from them at a Tech Field Day Extra event. They've combined their Phoenix and inSync products into a single platform, yielding the Druva Cloud Platform. This is being positioned as a "Data Management-as-a-Service" offering.

 

Data Management-as-a-Service

Conceptually, it looks a little like this.

[image via Druva]

According to Druva, the solution takes into account all the good stuff, such as:

  • Protection;
  • Governance; and
  • Intelligence.

It works with both:

  • Local data sources (end points, branch offices, and DCs); and
  • Cloud data sources (such as IaaS, Cloud Applications, and PaaS).

The Druva cloud is powered by AWS, and provides, amongst other things:

  • Auto-tiering in the cloud (S3/S3IA/Glacier); and
  • Easy recovery to any location (servers or the cloud).

 

Just Because You Can Put A Cat …

With everything there’s a right way and a wrong way to do it. Sometimes you might do something and think that you’re doing it right, but you’re not. Wesley Snipes’s line in White Men Can’t Jump may not be appropriate for this post, but Druva came up with one that is: “A VCR in the cloud doesn’t give you Netflix”. When you’re looking at cloud-based data protection solutions, you need to think carefully about just what’s on offer. Druva have worked through a lot of these requirements and claim their solution:

  • Is fully managed (no need to deploy, manage, or support software);
  • Offers predictable, lower costs;
  • Delivers linear and infinite (!) scalability;
  • Provides automatic upgrades and patching; and
  • Offers seamless data services.

I'm a fan of the idea that cloud services can offer a somewhat predictable cost model to customers. One of the biggest concerns faced by the C-level folk I talk to is the variability of cost when it comes to consuming off-premises services. The platform also offers source-side global deduplication (sketched below), with:

  • Application-aware block-level deduplication;
  • Only unique blocks being sent; and
  • Forever incremental and efficient backups.
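
To make the "only unique blocks being sent" idea concrete, here's a toy sketch of source-side, fixed-block deduplication. It's my own illustration of the general technique – Druva's actual implementation is application-aware and considerably smarter:

```python
import hashlib

def blocks_to_send(data: bytes, seen: set, block_size: int = 4096) -> int:
    """Fingerprint fixed-size blocks; only blocks with an unseen
    hash would travel over the wire."""
    sent = 0
    for off in range(0, len(data), block_size):
        digest = hashlib.sha256(data[off:off + block_size]).digest()
        if digest not in seen:
            seen.add(digest)
            sent += 1
    return sent

seen: set = set()
backup = b"A" * 16384 + b"B" * 4096  # highly redundant sample data
print(blocks_to_send(backup, seen))  # 2 unique blocks go over the wire
print(blocks_to_send(backup, seen))  # 0 on the next run: "forever incremental"
```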

The advantage of this approach is that, as Druva charge based on “post-globally deduped storage consumed”, chances are you can keep your costs under control.

 

It Feels Proper Cloudy

I know a lot of people who are in the midst of the great cloud migration. A lot of them are only now (!) starting to think about how exactly they’re going to protect all of this data in the cloud. Some of them are taking their existing on-premises solutions and adapting them to deal with hybrid or public cloud workloads. Others are dabbling with various services that are primarily cloud-based. Worse still are the ones assuming that the SaaS provider is somehow magically taking care of their data protection needs. Architecting your apps for multiple geos is a step in the right direction towards availability, but you still need to think about data protection in terms of integrity, not just availability. The impression I got from Druva is that they’ve taken some of the best elements of their on-premises and cloud offerings, sprinkled some decent security in the mix, and come up with a solution that could prove remarkably effective.

Tech Field Day – I’ll Be At TFD Extra at VMworld US 2017

The name says it all. I mentioned recently that I’ll be heading to the US in less than a week for VMworld. This is a quick post to say that I’ll also have the opportunity to participate in the Tech Field Day Extra event while at VMworld.  If you haven’t heard of the very excellent Tech Field Day events, I recommend you check them out. You can also check back on the TFDx website during the event as there’ll likely be video streaming along with updated links to additional content. You can also see the list of delegates and event-related articles that they’ve published.

I think it’s a great line-up of companies this time around, with some I’m familiar with and some not so much. I’m attending three or four of the sessions and recommend you tune in if you can to hear from Druva, Kingston, Pluribus Networks and NetApp.

Here’s the calendar of events as it stands (note that this might change).

August 29, 2017 11:00-12:00 Kingston Presents at Tech Field Day Extra at VMworld US 2017
Delegate Panel: Dan Frith, Mike Preston, TBD
August 29, 2017 13:00-14:00 Druva Presents at Tech Field Day Extra at VMworld US 2017
Delegate Panel: Dan Frith, Mike Preston, Sean Thulin, TBD
August 29, 2017 14:30-15:30 Pluribus Networks Presents at Tech Field Day Extra at VMworld US 2017
Delegate Panel: Carl Fugate, Eyvonne Sharp, Mike Preston, Rob Coote, Sean Thulin, TBD
August 29, 2017 16:00-17:00 NetApp Presents at Tech Field Day Extra at VMworld US 2017
Delegate Panel: Carl Fugate, Dan Frith, Eyvonne Sharp, Mike Preston, Rick Schlander, Rob Coote, Sean Thulin, TBD

 

Storage Field Day – I’ll Be At Storage Field Day 13 and Pure Accelerate

Storage Field Day 13

In what can only be considered excellent news, I’ll be heading to the US in early June for another Storage Field Day event. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. I’m looking forward to time travel and spending time with some really smart people for a few days. It’s also worth checking back on the Storage Field Day 13 website during the event (June 14 – 16) as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that have been published.

I think it’s a great line-up of presenting companies this time around. There are a few I’m very familiar with and some I’ve not seen in action before.

*Update – NetApp have now taken the place of Seagate. I’ll update the schedule when I know more.

 

I won't do the delegate rundown, but having met a number of these people I can assure you the videos will be worth watching.

Here’s the rough schedule (all times are ‘Merican Pacific and may change).

Wednesday, Jun 14 09:30-10:30 StorageCraft Presents at Storage Field Day 13
Wednesday, Jun 14 16:00-17:30 NetApp Presents at Storage Field Day 13
Thursday, Jun 15 08:00-12:00 Dell EMC Presents at Storage Field Day 13
Thursday, Jun 15 13:00-14:00 SNIA Presents at Storage Field Day 13
Thursday, Jun 15 15:00-17:00 Primary Data Presents at Storage Field Day 13
Friday, Jun 16 10:30-12:30 X-IO Technologies Presents at Storage Field Day 13

 

Storage Field Day Exclusive at Pure Accelerate 2017

You may have also noticed that I’ll be participating in the Storage Field Day Exclusive at Pure Accelerate 2017. This will be running from June 12 – 14 in San Francisco and promises to be a whole lot of fun. Check the landing page here for more details of the event and delegates in attendance.

I'd like to publicly thank in advance the nice folks from Tech Field Day who've seen fit to have me back, as well as Pure Storage for having me along to their event. Also, big thanks to the companies presenting. It's going to be a lot of fun. Seriously.

ClearSky Data Are Here To Help

Disclaimer: I recently attended VMworld 2016 – US.  My flights were paid for by myself, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


ClearSky Data presented recently at Tech Field Day Extra VMworld US 2016. You can see video from the presentation here. My rough notes on the session are here.

 

Overview

Lazarus Vekiarides, CTO and Co-founder, took us through an overview: "ClearSky's Global Storage Network delivers enterprise storage, spanning the entire data lifecycle, as a fully-managed service". Sounds good. I like it when people talk about lifecycles and fully-managed services. These things are hard to do, though.

ClearSky are aiming to provide “the performance and availability of on-premises storage with the economics and scale of the cloud”. They do this with:

  • Economics;
  • Scalability;
  • Reliability;
  • Security; and
  • Performance.

According to ClearSky, we’ve previously used a “Fragmented Hybrid” model when it comes to cloud storage.

[The "fragmented hybrid" model – image via ClearSky Data]

I must have been watching too much Better Off Ted with my eldest daughter, but when I heard of the Global Storage Network, it sounded a lot like something from a Veridian Dynamics advertisement. It’s not though, it’s cooler than that. With the Global Storage Network, ClearSky brings it all together.

[The ClearSky Global Storage Network – image via ClearSky Data]

You can read a whitepaper from ClearSky here, and there’s a data sheet here.

 

These Pictures are Compelling, But What Is It?

ClearSky say they are changing how enterprises access data:

  • Eliminate storage silos;
  • Pay only for what you use – up to 100% usable storage;
  • Guaranteed 100% uptime;
  • Multi-site data access without replication; and
  • A maximum 30-minute response time for Severity 1 and 2 tickets.

[ClearSky Data at a glance – image via ClearSky Data]

This is all delivered via a consumption-based model. The idea is that you're charged only for the capacity you use, while your applications have all the performance they need. And, like all good consumption models, if you delete data, you give the space back to ClearSky and are no longer billed for it.

“Customers simply plug into the ClearSky service to get the storage they need, when and where they need it, with the security, scalability and resilience that a business depends on.”

 

I’m Still Not Sure

That's because I'm bad at explaining things. There's an edge appliance (2RU, 24 slots, with about 6TB of flash cache) deployed on-premises. Cache is available (on resilient storage), but not copied. ClearSky POPs then offer distributed and optimised storage, with multiple copies sent to the cloud. Maybe a picture will explain it a bit better.

[ClearSky Data architecture – image via ClearSky Data]

With this architecture, ClearSky manages the entire data lifecycle. Active data lives either next to your applications, or in the metro area near your applications. Any cold data, along with backup and DR copies, is stored as multiple, geographically dispersed copies in the network.

There's support for iSCSI and FC today, and the write-back cache is destaged every 10 minutes and pushed to the metro cache or cloud.
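
If the write-back behaviour sounds abstract, here's a toy model of the idea as described – writes are acknowledged at the edge, and dirty data is destaged on an interval (ClearSky quoted 10 minutes). This is my own sketch of the general pattern, not ClearSky code:

```python
import time

class EdgeWriteBackCache:
    """Acknowledge writes locally; destage dirty blocks to the
    metro POP / cloud every flush_interval_s seconds."""
    def __init__(self, flush_interval_s: float = 600.0):
        self.dirty: dict = {}
        self.flush_interval_s = flush_interval_s
        self.last_flush = time.monotonic()

    def write(self, lba: int, data: bytes) -> None:
        self.dirty[lba] = data           # fast, local acknowledgement
        self._maybe_destage()

    def _maybe_destage(self) -> None:
        if time.monotonic() - self.last_flush >= self.flush_interval_s:
            for lba, data in self.dirty.items():
                self._push_to_pop(lba, data)
            self.dirty.clear()
            self.last_flush = time.monotonic()

    def _push_to_pop(self, lba: int, data: bytes) -> None:
        pass  # stand-in for the WAN hop to the ClearSky POP
```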

 

What Do I Use It For?

Data in the ClearSky network can be accessed from multiple locations without replication, offering mobility and availability.

Multi-site availability

  • Load balancing and disaster recovery

Workload mobility

  • In-metro and cross-metro
  • Application data can be accessed from other metros

And you can use it in all the ways you think you would, including DR, DC migration, and load balancing.

 

Make it Splunky

You probably know that companies use Splunk to analyse machine data. I've used it at home to munge squid logs when trying to track my daughter's internet use. Splunk captures, indexes and correlates machine data in a searchable repository, from which it can generate graphs, reports, alerts, and visualisations. Splunk demands high-performance and agile storage, and ClearSky have some experience with this. There's also a Splunk Reference Architecture. ClearSky say they're a good fit for Splunk Enterprise: the indexers simply write to the ClearSky Edge Cache, and ClearSky manages index migration through the cache and storage layers, greatly simplifying the solution. They also offer "[h]ighly consistent ingest performance, cloud capacity, and integrated backup using ClearSky snapshot technology".

 

Conclusion

This was the first time I’d encountered ClearSky Data, and I liked the sound of a lot of what I heard. They make some big claims on performance, but the architecture seems to support these, at least on the face of it. I’m a fan of people who are into fully-managed data lifecycles. I hope to have the opportunity to dig further into this technology at some stage to see if they’re the real deal. People use caching solutions because they have the ability to greatly improve the perceived (and actual) performance of infrastructure. And managed services are certainly popular with enterprises looking at alternatives to their current, asset-heavy, models of storage consumption. If ClearSky can do everything it says it can, they are worth looking into further.

Scality’s RING has a lot going on

Disclaimer: I recently attended VMworld 2016 – US.  My flights were paid for by myself, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


Here are my notes from Scality’s presentation at Tech Field Day Extra VMworld US 2016 Edition. You can get a rough copy here. You can also view videos of the Scality presentation here.

 

The Ring?

Like the movie? No. The RING. The Scality RING is object-based software-defined storage for the cloud. It runs on standard x86 servers to create a giant pool of storage.

[Scality RING architecture – image via Scality website]

It can also protect the data, and provides 100% reliable, high-performance access for capacity-driven applications. While it can run on any x86 hardware, it was pointed out that "[s]ome servers are better than others".

Customers are telling Scality that:

  • The “cloudification” of enterprise IT is accelerating
  • Enterprise wants “multiple clouds”
  • Object is the best for large capacity storage, and S3 is the standard API
  • Files are an integral part of enterprise IT
  • DevOps influences infrastructure choices

Scality have 116 customers so far, spread across the globe (50% North America, 35% EMEA, 15% APAC). Scality are big on hardware alliances (being a software play, this makes sense), and have agreements in place with HPE, Dell, and Cisco.

 

(The) RING 6.0 – A better sequel than we’d hoped for

Paul Speciale, VP of Products at Scality, took us through some of the features of RING 6.0.

[Scality RING 6.0 – image via Scality]

The focus for Scality with 6.0 has been on:

  • “Enterprization” – I’m not sure it’s a real word, but I do like the connotation
  • S3 Connector – Enterprise Deployments
  • Easy deployment model
  • Secure multi-tenancy and data at rest
  • Directory services federation
  • Utilisation reporting and management

 

Easy Deployment Model

  • All services deployed uniformly as Docker containers
  • Full scale-out: any S3 request can be handled by any S3 Connector ("any-to-any"), with standard IP load balancing and failover

Vault Service

  • Implements IAM Multi-tenancy with Accounts, Users, Groups, Roles, Access Key/Secret Key pairs
  • IAM REST-compatible, managed via the AWS CLI
  • Can be federated with Active Directory over ADFS/SAML 2.0

Metadata Service

  • S3 optimised service: fast, available, scale-out
  • Integral in RING layer – leveraged for Bucket & Vault metadata

 

Comprehensive IAM multi-tenancy and encryption

AWS Identity and Access Management (IAM)

  • S3 Connector implements all IAM multi-tenancy concepts: Accounts, Keys, Users, Groups, Roles
  • IAM policies for highly granular access control
  • AWS compatible: Management of IAM entities (Users, Groups) via the standard AWS CLI and JSON policy language (see the policy sketch below)
  • Secure authentication via AWS Signature v4 and v2 HMAC schemes

Bucket-level Encryption

  • Per-bucket encryption-at-rest of object data (specified through a header on Bucket PUT)
  • Encryption via AES 256-bit OpenSSL libraries
  • Integrates with customer-provided Key Management Service (KMS) via KMIP 1.1 API
  • KMS is invoked on PUT and GET operations

[Comprehensive IAM multi-tenancy and encryption – image via Scality]
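
The JSON policy language mentioned above is the standard AWS grammar, so a policy for the S3 Connector looks just like one you'd hand to AWS itself. A minimal read-only example (the bucket name is made up for illustration):

```python
import json

# Standard AWS-style policy document; the bucket name is illustrative.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}
print(json.dumps(read_only_policy, indent=2))
```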

 

Federated Access SSO to S3

  • Requires a SAML 2.0-compatible IdP
  • The IdP provides mapping from the enterprise directory server (AD)
  • Vault enables SSO via SAML assertion

 

S3 Utilization Reporting and Management

Stats and management framework

  • Real-time and historical statistics and metrics collected in scalable repository

Published RESTful APIs for monitoring and management

  • S3 Connector publishes key utilisation metrics (capacity, bandwidth and operations) at four levels of granularity
  • REST APIs for custom tool integrations

Management tools

  • User and Group management via standard AWS commands (CLI) and REST API
  • Integrated tools for graphing, metrics, log visualisation and search: Elasticsearch and Kibana, Grafana, Redis.

 

S3 Metadata – the scale-out engine of the connector

Metadata Service

  • Purpose-built for availability, resiliency, scale-out and fast performance for requirements of S3 operations
  • Key/value store replicated on SSDs (one per server)
  • Additional copy maintained as diff backup in RING for DR

The hard part: Distributed Consensus Algorithm

  • Leader with dynamic election and management of consistency (modified Raft protocol)
  • Can be distributed across DCs to enable multi-geo operations
  • By default, strict consistency rules enforced

High-availability and Performance

  • The cluster consists of multiple servers – an odd number to provide a majority quorum (5, 7 or 9; the arithmetic is worked in the sketch below)
  • As long as the majority (quorum) of servers is available, the service and Bucket remain available
  • Restarts failed servers with automated resynchronization
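
The quorum arithmetic is simple enough to sketch (my own illustration, not Scality code): a majority of N servers must stay reachable, so the cluster tolerates floor((N-1)/2) failures.

```python
def tolerable_failures(n: int) -> int:
    """A majority quorum of n // 2 + 1 must stay reachable,
    so up to (n - 1) // 2 servers can fail."""
    return (n - 1) // 2

for n in (5, 7, 9):  # the odd cluster sizes Scality mentions
    print(f"{n} servers: quorum={n // 2 + 1}, "
          f"survives {tolerable_failures(n)} failures")
```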

 

S3 Connector Scale-out at all levels

[S3 Connector scale-out at all levels – image via Scality]

S3 as the best On-ramp to Object Storage

[S3 as the on-ramp to object storage – image via Scality]

  • Developers can install and develop S3-based apps locally (a quick smoke test is sketched at the end of this section)
  • Enterprises can host a small, local object storage system in production
  • Enterprises can host a local test/dev environment to learn about object storage

 

Scality Open Source S3 Server

S3 API Compatible with the S3 Connector

  • Single Docker Container for simplified deployment
  • Stores data in local Docker Volume (local storage)
  • Metadata managed in single key/value database
  • S3 compatible Bucket and Object operations, error and response codes

Downloadable on Docker Hub

  • Can be pulled via UI or Docker pull command as per instructions on s3.scality.com
  • Can be hosted on laptops and single servers
  • Seamless transition to scale-out solution on RING

ISV Certified with multiple solutions

  • Backup, archive, sync-n-share, surveillance, migration
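
If you do pull the open source S3 Server down for a spin, a smoke test from Python might look like the below. The endpoint and credentials here are assumptions for illustration – check the instructions on s3.scality.com for the actual defaults:

```python
import boto3  # pip install boto3

# Endpoint and keys are placeholders for a local S3 Server instance.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8000",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.create_bucket(Bucket="test-bucket")
s3.put_object(Bucket="test-bucket", Key="hello.txt", Body=b"hello RING")
print(s3.get_object(Bucket="test-bucket", Key="hello.txt")["Body"].read())
```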

 

Summary

So what do you get with Scality?

  • S3 Server & S3 Connector
    • Provides a seamless transition from "free" test/dev single-server trials to full scale-out deployments (note that the trial is not available to robots)
    • Small to large deployments, from local storage to the full RING
    • Simple to deploy via Docker containers
  • Comprehensive enterprise deployment features
    • Multi-tenancy
    • Active Directory SSO/federation

 

Further Reading and Thoughts

Justin did a comprehensive write-up on Scality here. Sure, I could have saved you a lot of time and sent you there in the first place, but that's not how I roll. I admit I'm not super familiar with Scality and have yet to get cracking with the RING trial. That said, with version 6.0 they seem to have included a lot of the features that enterprises are interested in when looking at object storage with cloudy tendencies. There's decent support for file protocols such as NFS and SMB, just no block. But that's not what the kids are into these days in any case. I covered some of the other enterprise features above, and they've been around for a little while now. If you're looking into rolling your own object solution, I recommend giving Scality a spin.

So NooBaa, eh?

Disclaimer: I recently attended VMworld 2016 – US.  My flights were paid for by myself, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


I had the opportunity to speak with NooBaa about six months ago. At the time they were still developing their product, but I thought it looked pretty cool. At Tech Field Day Extra, they demoed their cloud services engine. The company was founded in 2014 by Yuval Dimnik (Co-founder and CEO) and Guy Margalit (Co-founder and CTO), launched its product in September 2016, and has a current headcount of 14 (they tell us they have strong security/storage DNA). If you're familiar with Exanet or Dell FluidFS, you'll be familiar with some of their capabilities.

“Customers don’t care how you do your tech, they care how it fixes their problems”

 

So NooBaa, eh?

They have thought about the name. A lot. It's a pure software product enabling folks to create and provision cloud services:

  • Storage (like AWS S3) – First!
  • Serverless compute (like AWS Lambda) – Future

The key is that the customer owns the service, with:

  • Full control of who accesses what, and what stays on-premises
  • No cloud vendor lock-in

The services use:

  • Heterogeneous resources – cloud resources and servers
  • In the cloud, on-premises, and spanned

So, take all the spare storage you have lying about on Windows and Linux VMs, bang it all into a single namespace, and present it back to your object-friendly apps. Replicate it to the cloud if you like. Or use all your spare clouds. Sounds like a cool idea.

Design Considerations (once bitten, twice shy)

They wanted to design a product that behaves like the cloud, but gives you the choice to consume from on-premises or cloud.

But can you predict the unpredictable?

  • Cloud strategy? Everyone has one of those, they’re just not sure what it really means.
  • Growth rate? Oh, it grows a lot.
  • Hardware technologies? Yep, software still needs hardware.
  • Vendors? Who can really work out what they do?
  • Organisational changes?
  • Security issues and lurking “heart bleeds”?

Stuff is hard. Along with this, NooBaa were looking to add the following capabilities:

  • On-premises, multi-cloud, and supporting cloud migration
  • P2P scalable capacity
  • Monitor hardware and adapt
  • Agnostic to the machine
  • Allowed to grow, allowed to shrink
  • User space as a religion – when something needs fixing, you can fix it right away

Architecture

NooBaa is all about a hybrid approach to resources, supporting multiple cloud providers and on-premises resources. It also has support for multiple sites.

[NooBaa architecture – image via NooBaa]

The key to NooBaa’s storage performance in what might seem to be non-performant environments is the way it stores data, as you can see in the below diagram.

[NooBaa data layout – image via NooBaa]
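
In lieu of the diagram, here's a toy illustration of the chunk-and-spread idea – slicing an object into chunks and placing replicas across a heterogeneous pool of nodes. It's the general concept only, not NooBaa's actual placement algorithm, and the node names are made up:

```python
import hashlib, os, random

def place_chunks(obj: bytes, nodes: list, replicas: int = 3,
                 chunk_size: int = 1 << 20) -> dict:
    """Map each chunk's fingerprint to the nodes holding a replica."""
    layout = {}
    for i in range(0, len(obj), chunk_size):
        chunk = obj[i:i + chunk_size]
        cid = hashlib.sha256(chunk).hexdigest()[:12]
        layout[cid] = random.sample(nodes, k=min(replicas, len(nodes)))
    return layout

pool = ["win-vm-01", "linux-vm-02", "aws-node-03", "azure-node-04"]
print(place_chunks(os.urandom(3 << 20), pool))  # a 3MB object, 3 chunks
```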

 

Note that they're not targeting low-latency workloads. At this stage they're cloud agnostic and hoping to keep things that way. Heterogeneous resources are key for NooBaa. You can also sign up for the Community Edition, which is limited to 20TB of aggregate object storage.

Final Thoughts and Reading

 

The name doesn't roll off the tongue, and the colour scheme is very pretty. But I think this belies the thought that's gone into the product. Yuval and his team have a strong background in scalable object storage, and I'm excited to see them finally come out of stealth. The concept of treating storage nodes as second-class citizens is interesting, and I'm looking forward to taking the Community Edition for a spin when I get my act together in the near future. In the meantime, head over to Alastair's blog for a more succinct write-up on what we saw. John White also did a great post here. You can grab a copy of my raw notes here, and watch NooBaa's TFDx presentations here.

 

Paessler have been doing this for a while now

Disclaimer: I recently attended VMworld 2016 – US.  My flights were paid for by myself, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


Paessler recently presented at Tech Field Day Extra – VMworld US 2016. My rough notes can be found here. You can see videos from the presentation here and here.

 

What’s a Paessler?

Benjamin Day, Senior Systems Engineer with Paessler, took us through some of the background on the company. Founded in 1997 in Nuremberg, Germany, Paessler is 100% owned by founders and employees. The US is their largest market, and they tell us that over 70% of Fortune 100 enterprises worldwide use PRTG.

 

What’s a Sensor?

PRTG is often referred to as “MRTG for Windows”. When I say often, I mean it was mentioned by Paessler yesterday. But they also say it on their website. You can get a product overview from here. You can also check out a demo here.

So what are sensors? PRTG is defined (built and licensed) at the sensor level. Pretty much anything you would monitor is a sensor (you can read more on that here). Note that a sensor isn't a single metric – each sensor can contain multiple metrics, known as channels. Generally speaking, you can count on using 5-10 sensors per device. Here's an image I swiped from the Paessler website that kind of shows what sensors look like.

[PRTG sensors – image via Paessler]

Licences come in lots of 500, 1000, 2500, 5000, and Unlimited sensors. The good thing is that they're not named, so Christopher doesn't have to monitor those printers if he really doesn't want to.
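
To get a feel for how the licensing maths works out, here's a quick sizing sketch (the device count is made up; the 5-10 sensors per device figure is Paessler's rule of thumb):

```python
# Rough PRTG licence sizing based on the 5-10 sensors/device rule.
TIERS = [500, 1000, 2500, 5000]
devices = 120  # hypothetical estate

for per_device in (5, 10):
    needed = devices * per_device
    tier = next((t for t in TIERS if t >= needed), "Unlimited")
    print(f"{devices} devices x {per_device} sensors = {needed} "
          f"-> {tier} licence")
```
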
From a notification perspective, there are a bunch of options to get the message out, and you can send things via:

  • Email;
  • SMS (through third-party or IP-enabled SMS gateways);
  • PRTG-enabled smart devices (there’s a mobile app);
  • syslog; and
  • SNMP traps.

There are also options for auto remediation; you can do things via a script (PowerShell, shell, etc.) or, amongst other things, kick off a web action (handy for ticketing systems).

 

Thresholds and Notifications

There are all sorts of things you can do in terms of actions when you exceed thresholds, including:

  • Sending email
  • Sending push notifications (to a user or group, and you can customise the message)

You can modify the format (HTML, text, or text with custom content) and customise the priority. You can also add an entry to the event logs or send an Amazon Simple Notification Service (SNS) message. You might want to assign a ticket as well.

Note also that PRTG is multi-tenant capable, making it an interesting choice for service providers. There’s also an option to “white box” it with your own logo if you’re into that kind of thing. Note that MSP licensing is done in a different fashion to normal licensing.

My favourite thing (besides what seems like a pretty comprehensive monitoring capability and a lightweight deployment requirement) is that every sensor has a QR code. And the PRTG app has a QR code scanner (you see where I'm going with this?). You can print out the device QR codes, scan them, and they'll come up in PRTG. There's no longer a requirement to faff about with long labels on hosts. If you're using per-port sensors on your switches, you can put a QR code on the cable.

 

Conclusion

Paessler have been doing this for almost 20 years now. It strikes me that the product is easy to deploy and use while being fairly powerful and feature-rich. If you'd like to try PRTG out, there's a free licence available for both personal and commercial use, limited to 100 sensors.

If you can monitor it with SNMP (their preference) or WMI, and are happy to use a Windows platform, then PRTG could be the tool for you. I recommend checking them out.