Random Short Take #59

Welcome to Random Short Take #59.

  • It’s been a while since I’ve looked at Dell Technologies closely, but Tech Field Day recently ran an event and Pietro put together a pretty comprehensive view of what was covered.
  • Dr Bruce Davie is a smart guy, and this article over at El Reg on decentralising Internet services made for some interesting reading.
  • Clean installs and Time Machine system recoveries on macOS aren’t as nice as they used to be. I found this out a day or two before this article was published. It’s worth reading nonetheless, particularly if you want to get your head around the various limitations with Recovery Mode on more modern Apple machines.
  • If you follow me on Instagram, you’ll likely realise I listen to records a lot. I don’t do it because they “sound better” though, I do it because it works for me as a more active listening experience. There are plenty of clowns on the Internet ready to tell you that it’s a “warmer” sound. They’re wrong. I’m not saying you should fight them, but if you find yourself in an argument this article should help.
  • Speaking of technologies that have somewhat come and gone (relax – I’m joking!), this article from Chris M. Evans on HCI made for some interesting reading. I always liked the “start small” approach with HCI, particularly when comparing it to larger midrange storage systems. But things have definitely changed when it comes to available storage and converged options.
  • In news via press releases, Datadobi announced version 5.12 of its data mobility engine.
  • Leaseweb Global has also made an announcement about a new acquisition.
  • Russ published an interesting article on new approaches to traditional problems. Speaking of new approaches, I was recently a guest on the On-Premise IT Podcast discussing when it was appropriate to scrap existing storage system designs and start again.


Storage Field Day 22 – I’ll Be At Storage Field Day 22

Here’s some news that will get you excited. I’ll be virtually heading to the US this week for another Storage Field Day event. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. It’s also worth visiting the Storage Field Day 22 website during the event (August 4-6) as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that have been published.

I think it’s a great line-up of both delegates and presenting companies this time around.


I’d like to publicly thank in advance the nice folks from Tech Field Day who’ve seen fit to have me back, as well as my employer for letting me take time off to attend these events. Also big thanks to the companies presenting. It’s going to be a lot of fun. Last time was a little weird doing this virtually, rather than in person, but I think it still worked. As things open back up in the US you’ll start to see a blend of in-person and virtual attendance for these events. I know that Komprise will be filming its segment from the Doubletree. Hopefully we’ll get things squared away and I’ll be allowed to leave the country next year. I’m really looking forward to this, even if it means doing the night shift for a few days. Presentation times are below, and all times are US/Pacific.

Wednesday, Aug 4, 8:00-9:30 – Infrascale Presents at Storage Field Day 22
Wednesday, Aug 4, 11:00-13:30 – Intel Presents at Storage Field Day 22
Presenters: Allison Goodman, Elsa Asadian, Kelsey Prantis, Kristie Mann, Nash Kleppan, Sagi Grimberg
Thursday, Aug 5, 8:00-10:00 – CTERA Presents at Storage Field Day 22
Presenters: Aron Brand, Jim Crook, Liran Eshel
Thursday, Aug 5, 11:00-13:00 – Komprise Presents at Storage Field Day 22
Presenters: Krishna Subramanian, Mike Peercy, Mohit Dhawan
Friday, Aug 6, 8:00-9:00 – Fujifilm Presents at Storage Field Day 22
Friday, Aug 6, 10:00-11:30 – Pure Storage Presents at Storage Field Day 22
Presenters: Ralph Ronzio, Stan Yanitskiy

Cohesity DataProtect Delivered As A Service – SaaS Connector

I recently wrote about my experience with Cohesity DataProtect Delivered as a Service. One thing I didn’t really go into in that article was the networking and resource requirements for the SaaS Connector deployment. It’s nothing earth-shattering, but I thought it was worthwhile noting nonetheless.

Each SaaS Connector VM you deploy has the following system requirements:

  • 4 CPUs
  • 10 GB RAM
  • 20 GB disk space (100 MB/s throughput, 100 IOPS)
  • Outbound Internet connection

In terms of scalability, the advice from Cohesity at the time of writing is to deploy “one SaaS Connector for each 160 VMs or 16 TB of source data. If you have more data, we recommend that you stagger their first full backups”. Note that this is subject to change.
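If you want to turn that guidance into a quick sizing calculation, here's a minimal sketch. It assumes the 160 VM / 16 TB figures quoted above (which, again, may change), and folds in Cohesity's recommendation, covered below, to deploy more than one Connector:

```python
# Rough SaaS Connector sizing based on Cohesity's guidance at the time of
# writing: one Connector per 160 VMs or 16 TB of source data, whichever
# demands more, with a floor of two Connectors for redundancy.
import math

def connectors_needed(vm_count: int, source_tb: float) -> int:
    by_vms = math.ceil(vm_count / 160)
    by_data = math.ceil(source_tb / 16)
    return max(by_vms, by_data, 2)

print(connectors_needed(vm_count=400, source_tb=50))  # -> 4
```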

The outbound Internet connectivity is important. You'll (hopefully) have some kind of firewall in place, so the following ports need to be open.

Port | Protocol | Target | Direction (from Connector) | Purpose
443 | TCP | helios.cohesity.com | Outgoing | Connection used for control path
443 | TCP | helios-data.cohesity.com | Outgoing | Used to send telemetry data
22, 443 | TCP | rt.cohesity.com | Outgoing | Support channel
11117 | TCP | *.dmaas.helios.cohesity.com | Outgoing | Connection used for data path
29991 | TCP | *.dmaas.helios.cohesity.com | Outgoing | Connection used for data path
443 | TCP | *.cloudfront.net | Outgoing | To download upgrade packages
443 | TCP | *.amazonaws.com | Outgoing | For S3 data traffic
123, 323 | UDP | ntp.google.com or internal NTP | Outgoing | Clock sync
53 | TCP & UDP | 8.8.8.8 or internal DNS | Bidirectional | Host resolution
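Before you deploy, it's worth confirming that at least the fixed hostnames in that table are reachable from the network the Connector will sit on. Here's a minimal sketch that probes the TCP targets; the wildcard entries and the UDP services (NTP, DNS) need to be verified by other means:

```python
# Outbound connectivity check for the fixed TCP targets listed above.
# Run this from the network where the SaaS Connector will be deployed.
import socket

CHECKS = [
    ("helios.cohesity.com", 443),
    ("helios-data.cohesity.com", 443),
    ("rt.cohesity.com", 22),
    ("rt.cohesity.com", 443),
]

for host, port in CHECKS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK   {host}:{port}")
    except OSError as exc:
        print(f"FAIL {host}:{port} ({exc})")
```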

Cohesity recommends that you deploy more than one SaaS Connector, and you can scale them out depending on the number of VMs and the amount of data you're protecting with the service.

If you have concerns about bandwidth, you can control the bandwidth used by the SaaS Connector via Helios.

Navigate to Settings -> SaaS Connections and click on Bandwidth Usage Options. You can then add a rule.

You then schedule bandwidth usage, potentially for quiet times (particularly useful in small environments where Internet connections may be shared with end users). There’s support for upload and download traffic, and multiple schedules as well.

And that’s pretty much it. Once you have your SaaS Connectors deployed you can monitor everything from Helios.


Random Short Take #58

Welcome to Random Short Take #58.

  • One of the many reasons I like Chin-Fah is that he isn’t afraid to voice his opinion on various things. This article on what enterprise storage is (and isn’t) made for some insightful reading.
  • VMware Cloud Director 10.3 is now GA – you can read more about it here.
  • Feeling good about yourself? That’ll be quite enough of that thanks. This article from Tom on Value Added Resellers (VARs) and technical debt goes in a direction you might not expect. (Spoiler: staff are the technical debt). I don’t miss that part of the industry at all.
  • Speaking of work, this article from Preston on being busy was spot on. I’ve worked in many places in my time where it’s simply alarming how much effort gets expended in not achieving anything. It’s funny how people deal with it in different ways too.
  • I’m not done with articles by Preston though. This one on configuring a NetWorker AFTD target with S3 was enlightening. It’s been a long time since I worked with NetWorker, but this definitely wasn’t an option back then. Most importantly, as Preston points out, “we backup to recover”, and he does a great job of demonstrating the process end to end.
  • I don’t think I talk about data protection nearly enough on this weblog, so here’s another article from a home user’s perspective on backing up data with macOS.
  • Do you have a few Rubrik environments lying around that you need to report on? Frederic has you covered.
  • Finally, the good folks at Backblaze are changing the way they do storage pods. You can read more about that here.

*Bonus Round*

I think this is the 1000th post I’ve published here. Thanks to everyone who continues to read it. I’ll be having a morning tea soon.

Cohesity DataProtect Delivered As A Service – A Few Notes

As part of a recent vExpert giveaway the folks at Cohesity gave me a 30-day trial of the Cohesity DataProtect Delivered as a Service offering. This is a component of Cohesity’s Data Management as a Service (DMaaS) offering and, despite the slightly unwieldy name, it’s a pretty neat solution. I want to be clear that it’s been a little while since I had any real stick time with Cohesity’s DataProtect offering, and I’m looking at this in a friend’s home lab, so I’m making no comments or assertions regarding the performance of the service. I’d also like to be clear that I’m not making any recommendation one way or another with regards to the suitability of this service for your organisation. Every organisation has its own requirements and it’s up to you to determine whether this is the right thing for you.


Overview

I’ve added a longer article here that explains the setup process in more depth, but here’s the upshot of what you need to do to get up and running. In short, you sign up, select the region you want to back up workloads to, configure your SaaS Connectors for the particular workloads you’d like to protect, and then go nuts. It’s really pretty simple.

Workloads

In terms of supported workloads, the following environments are currently supported:

  • Hypervisors (VMware and Hyper-V);
  • NAS (generic SMB and NFS, Isilon, and NetApp);
  • Microsoft SQL Server;
  • Oracle;
  • Microsoft 365;
  • Amazon AWS; and
  • Physical hosts.

This list will obviously grow as support for particular workloads in DataProtect and Helios improves over time.

Regions

The service is currently available in seven AWS Regions:

  • US East (Ohio)
  • US East (N. Virginia)
  • US West (Oregon)
  • US West (N. California)
  • Canada (Central)
  • Asia Pacific (Sydney)
  • Europe (Frankfurt)

You’ve got some flexibility in terms of where you store your data, but it’s my understanding that the telemetry data (i.e. Helios) goes to one of the US East Regions. It’s also important to note that once you’ve put data in a particular Region, you can’t then move that data to another Region.

Encryption

Data is encrypted in-flight and at rest, and you have a choice of KMS solutions (Cohesity-managed or DIY AWS KMS). Note that once you choose a KMS, you cannot change your mind. Well, you can, but you can’t do anything about it.
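If you're leaning towards the DIY option, creating a customer-managed key is straightforward with boto3. This is only a sketch: the alias name is made up, and the key policy granting Cohesity access to the key is deliberately omitted, so follow the onboarding documentation for that part:

```python
# Create a customer-managed symmetric key in AWS KMS for the "DIY" option.
# The alias is illustrative; granting Cohesity access via the key policy
# is a separate step covered in the service's onboarding documentation.
import boto3

kms = boto3.client("kms", region_name="ap-southeast-2")  # e.g. Sydney

key = kms.create_key(
    Description="Cohesity DataProtect as a Service customer-managed key",
    KeyUsage="ENCRYPT_DECRYPT",
    KeySpec="SYMMETRIC_DEFAULT",
)
key_id = key["KeyMetadata"]["KeyId"]

kms.create_alias(AliasName="alias/cohesity-dmaas", TargetKeyId=key_id)
print(f"Created {key_id} - and remember, you can't change your mind later.")
```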


Thoughts

Data protection as a service offerings are proving increasingly popular with customers, data protection vendors, and service providers. The appeal for the punters is that they can apply some of the same thinking to protecting their investment in their cloud as they did to standing it up in the first place. The appeal for the vendors and SPs is that they can deliver service across a range of platforms without shipping tin anywhere, and build up annuity business as well.

With regards to this particular solution, it still has some rough edges, but it’s great to see just how much can already be achieved. As I mentioned, it’s been a while since I had some time with DataProtect, and the usability and functionality of both it and Helios have really come along in leaps and bounds. And the beauty of this being a vendor-delivered as-a-Service offering is that features can be rolled out on a frequent basis, rather than waiting for quarterly improvements to arrive via regularly scheduled software maintenance releases. Once you get your head around the workload, things tend to work as expected, and it was fairly simple to get everything set up and working in a short period of time.

This isn’t for everyone, obviously. If you’re not a fan of doing things in AWS, then you’re really not going to like how this works. And if you don’t operate near one of the currently supported Regions, then the tyranny of bandwidth (i.e. physics) may prevent reasonable recovery times from being achievable for you. It might seem a bit silly, but these are nonetheless things you need to consider when looking at adopting a service like this. It’s also important to think about the security posture of these kinds of services. Sure, things are encrypted, and you can use MFA with Helios, but folks outside the US sometimes don’t really dig the idea of any of their telemetry data living in the US. Sure, it’s a little bit tinfoil hat, but you’d be surprised how much it comes up. And it should be noted that this is the same for on-premises Cohesity solutions using Helios. Then again, Cohesity is by no means alone in sending telemetry data back for support and analysis purposes. It’s fairly common, and something your infosec folks will likely already know how to deal with.

If you’re fine with that (and you probably should be), and looking to move away from protecting your data with on-premises solutions, or looking for something that gives you some flexible deployment and management options, this could be of interest. As I mentioned, the beauty of SaaS-based solutions is that they’re more frequently updated by the vendor with fixes and features. Plus you don’t need to do a lot of the heavy lifting in terms of care and feeding of the environment. You’ll also notice that this is the DataProtect component, and I imagine that Cohesity has plans to fill out the Data Management part of the solution more thoroughly in the future. If you’d like to try it for yourself, I believe there’s a trial you can sign up for. Finally, thanks to the Cohesity TAG folks for the vExpert giveaway and making this available to people like me.

Ransomware? More Like Ransom Everywhere …

Stupid title, but ransomware has been in the news quite a bit recently. I’ve had some tabs open in my browser for over twelve months with articles about ransomware that I found interesting. I thought it was time to share them and get this post out there. This isn’t comprehensive by any stretch, but rather it’s a list of a few things to look at when looking into anti-ransomware solutions, particularly for NAS environments.


It Kicked Him Right In The NAS

The way I see it (and I’m really not the world’s strongest security person), there are (at least) three approaches to NAS and ransomware concerns.

The Endpoint

This seems to be where most companies operate – addressing ransomware as it enters the organisation via the end users. There are a bunch of solutions out there that are designed to protect humans from themselves. But this approach doesn’t always help with alternative attack vectors, and it’s only as good as the processes you have in place to keep those endpoints up to date. I’ve worked in a few shops where endpoint protection solutions were deployed and then inadvertently clobbered by system updates or users with too many privileges. The end result was that the systems didn’t do what they were meant to and there was much angst.

The NAS Itself

There are things you can do with NetApp solutions, for example, that are kind of interesting. Something like Stealthbits looks neat, and Varonis also uses FPolicy to get a similar result. Your mileage will vary with some of these solutions, and, again, it comes down to the ability to effectively ensure that these systems are doing what they say they will, when they will.

Data Protection

A number of the data protection vendors are talking about their ability to recover quickly from ransomware attacks. The capabilities vary, as they always do, but most of them have a solid handle on quick recovery once an infection is discovered. They can even help you discover that infection by analysing patterns in your data protection activities. For example, if a whole bunch of data changes overnight, it’s likely that you have a bit of a problem. But some of the effectiveness of these solutions is limited by the frequency of data protection activity, and whether anyone is reading the alerts. The challenge here is that it’s a reactive approach, rather than something preventative. That said, Rubrik, for example, is working hard to enhance its Radar capability into something a whole lot more interesting.
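To make the “whole bunch of data changed overnight” signal concrete, here's a toy sketch of the idea: compare the latest change rate against recent history and flag outliers. The real products are far more sophisticated than this, obviously:

```python
# Toy anomaly check: flag a backup run whose changed-data volume is a
# statistical outlier compared with recent history.
from statistics import mean, stdev

def looks_anomalous(history_gb: list[float], latest_gb: float,
                    threshold: float = 3.0) -> bool:
    """True if the latest change rate is > threshold std devs above the mean."""
    if len(history_gb) < 5:
        return False  # not enough history to judge
    mu, sigma = mean(history_gb), stdev(history_gb)
    return sigma > 0 and (latest_gb - mu) / sigma > threshold

nightly_changed_gb = [12.1, 9.8, 11.4, 10.7, 12.9, 11.2]
print(looks_anomalous(nightly_changed_gb, 480.0))  # True - read your alerts
```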

Other Things

Other things that can help limit your exposure to ransomware include adopting generally robust security practices across the board, monitoring all of your systems, and talking to your users about not clicking on unknown links in emails. Some of these things are easier to do than others.


Thoughts

I don’t think any of these solutions provide everything you need in isolation, but the challenge is going to be coming up with something that is supportable and, potentially, affordable. It would be great if it works, too. Ransomware is a problem, and becoming a bigger problem every day. I don’t want to sound like I’m selling you insurance, but it’s almost not a question of if, but when. But paying attention to some of the above points will help you on your way. Of course, sometimes Sod’s Law applies, and things will go badly for you no matter how well you think you’ve designed your systems. At that point, it’s going to be really important that you’ve set up your data protection systems correctly, otherwise you’re in for a tough time. Remember, it’s always worth thinking about what your data is worth to you when you’re evaluating the relative value of security and data protection solutions. This article from Chin-Fah had some interesting insights into the problem. And this article from Cohesity outlined a comprehensive approach to holistic cyber security. This article from Andrew over at Pure Storage did a great job of outlining some of the challenges faced by organisations when rolling out these systems. This list of NIST ransomware resources from Melissa is great. And if you’re looking for a useful resource on ransomware from VMware’s perspective, check out this site.

ComplyTrust And The Right To Be Forgotten

I came across a solution from ComplyTrust a little while ago and thought it was worth mentioning here. I am by no means any kind of authority with this kind of stuff so this is very much a high-level view.


The Problem

Over the last little while (decades even?), a number of countries and local authorities have tightened up privacy regulations in the hope that citizens would have some level of protection from big corporations mercilessly exploiting their personal information for commercial gain. A number of these regulations (General Data Protection Regulation, California Consumer Privacy Act, etc.) include the idea of “the right to be forgotten”. This gives citizens the right to request, in particular circumstances, that data about them is not kept by particular organisations. Why is this important? We have pretty good privacy protection in Australia, but I still get recruiters moving from one organisation to another and taking contacts with them.

How Does This Happen? 

Think of all the backups of data that organisations make. Now think of how long some of those get kept for. For every 1 restore you do, you might have made 100 backups. Depending on what an organisation is doing for data protection, there are potentially thousands of copies of records relating to you stored on their infrastructure. And then, when a company gets acquired, all that data gets passed on to the acquiring company. Suddenly it becomes that much more difficult to keep track of which company has your data on file.
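To put some rough numbers on that, here's a back-of-the-envelope sketch. The retention scheme is assumed (but fairly typical), and it ignores archives, test copies, and everything else that makes real environments worse:

```python
# How many backup copies might contain one customer record? Assumed scheme:
# 30 daily, 52 weekly, and 84 monthly (7 years) backups per system,
# replicated to a second site, across a few systems that hold the record.
daily, weekly, monthly = 30, 52, 84
sites = 2    # primary + replica
systems = 3  # e.g. CRM, data warehouse, file share

copies = (daily + weekly + monthly) * sites * systems
print(copies)  # 996 - roughly a thousand copies of one record
```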

Not a week goes by where I don’t get an offer to buy contact details of VMware users or people interested in cloud products. There is a whole world of B2B marketing going on where your details are being sold for a very low price. Granted, some of this is illegitimate in the first place, so regulations aren’t really going to help you. But the right to be removed from various databases around the place is still important, and something that governments are starting to pay more attention to.

The challenge for these organisations is that they can’t exactly keep a database of people they’re meant to forget – it defeats the purpose of the exercise.


The Solution?

So what’s one possible solution? Forget-Me-Yes (FMY) is a “Software-as-a-Service API Platform specifically manages both organizational and individual Right-to-be-Forgotten (RtbF) and Right-of-Erase (RoE) compliance of structured data for Brazil’s LGPD, Europe’s GDPR, California Consumer Privacy Act (CCPA), Virginia CDPA, Nevada SB220, and Washington Privacy Act (WPA)”.

It’s a SaaS offering going for US $39.99 per month. To get started, you authenticate the service with one or more databases you want to manage. In version 1 of the software, it only supports Salesforce. I understand that ComplyTrust is looking to expand support to get the solution working with Shopify, Marketo, and a generic SQL plugin. It stores just enough information to uniquely identify the person, and no more than that.
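That “just enough information” idea is the interesting bit, because a plaintext forget list would itself be a database of people. One generic way to square the circle (and I'm not suggesting this is how ComplyTrust actually does it) is to store a keyed hash of the identifiers instead, so the list can answer “should this record be suppressed?” without being readable as a directory:

```python
# Generic illustration of a suppression list that stores keyed hashes of
# identifiers rather than the identifiers themselves. The key management
# here is hypothetical - in practice the key needs protecting and rotating.
import hashlib
import hmac

SECRET = b"rotate-and-protect-this-key"  # hypothetical secret

def fingerprint(email: str) -> str:
    normalised = email.strip().lower().encode()
    return hmac.new(SECRET, normalised, hashlib.sha256).hexdigest()

forget_list = {fingerprint("judy@example.com")}

def must_forget(email: str) -> bool:
    return fingerprint(email) in forget_list

print(must_forget("Judy@Example.com"))  # True - normalisation matters
```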


Thoughts and Further Reading

Some of us want to be remembered forever, but most of us place more value on the choice not to be remembered forever. As I said at the start, I have very little real understanding of the depth and breadth of some of the privacy issues facing both citizens and corporations alike. That said, working closely with data protection offerings on a daily basis, and being focused on data retention for fun and profit, I can see how this is going to become something of a hot topic as the world gets back to spending time trying to understand the implications of keeping scads of data on folks without their consent. Clearly, a solution like this from ComplyTrust isn’t the final word in addressing the issue, but it’s nice to see that folks are taking this problem seriously. I’m looking forward to hearing more about this product as it evolves in the next little while.

MDP – Yeah You Know Me

Data protection is a funny thing. Much like insurance, most folks understand that it’s important, normally dread having to use it, and dislike the fact that it costs money “just in case something goes wrong”. But then they get hit by ransomware, or Judy in Accounting absolutely destroys a critical spreadsheet, and they realise it’s probably not such a bad thing to have this “data protection”. Books are weird too. Not the idea that we’ll put a whole bunch of information in a file and make it accessible to people. Rather, that sometimes that information is given context and then printed out, sold, read, and stuck on a shelf somewhere for future reference. Indeed, I was a voracious consumer of technical books early in my career, particularly when many vendors were insisting that this was the way to share knowledge with end users. YouTube wasn’t a thing, and access to manuals and reference guides was limited to partners or the vendors themselves. The problem with technical books, however, is that if they cover a specific version of software (or hardware or whatever), they very quickly become outdated in potentially significant ways. As enjoyable as some of those books about Windows NT 4.0 might have been for us all, they quickly became monitor stands when Windows 2000 was released. The more useful books were the ones that shared more of the how, what, when, and why of the topic, rather than digging into specific guidance on how to do an activity with a particular solution. Particularly when that solution was re-written by the vendor between major versions.

Early on in my career I got involved in my employer’s backup and recovery solution. At the time it was all about GFS backup schemes and DDS-2 drives and per-server protection schemes that mostly worked. It was viewed as an unnecessary expense and given to junior staff to look after. There was a feeling, at least with some of the Windows stuff, that if anything went wrong it would likely go wrong in a big way. I generally felt ill at ease when recovery requests would hit the service desk queue. As a result of this, my interest in being able to bring data back from human error, disaster, or other kinds of failure was piqued, and I went out and bought a copy of Unix Backup and Recovery. As a system administrator, it was a great book to have at hand. There was a nice combination of understandable examples and practical application of backup and recovery principles covered throughout that book. I used to joke that it even had a happy ending, and everyone got their data back. As I moved through my career, I maintained an interest in data protection (it seemed, at one stage, to go hand in hand with storage for whatever reason), and I’ve often wondered what people do when they aren’t given the appropriate guidance on how to best do data protection to meet their needs.

All of this is an extremely long-winded way of saying that my friend W. Curtis Preston has released his fourth book, the snappily titled “Modern Data Protection”, and it makes for some excellent reading. If you listen to him talk about why he wrote another book on his podcast, you’ll appreciate that this thing was over 10 years in the making, had an extensive outline developed for it, and really took a lot of effort to get done. As Curtis points out, he goes out of his way not to name vendors or solutions in the book (he works for Druva). Instead, he spends time on the basics (why backup?), what you should back up, how to back up, and even when you should be backing things up.

This one doesn’t just cover off the traditional centralised server/tape library combo so common for many years in enterprise shops. It also goes into more modern on-premises solutions (I think the kids call them hyper-converged) and cloud-native solutions of all different shapes and sizes. He talks about how to protect a wide variety of workloads and solution architectures, drills into the importance of recovery testing, and even covers off the difference between backup and archive. Yes, they are different, and I’m not just saying that because I contributed that particular chapter. There’s talk of traditional data sources, deduplication technologies, and more fashionable stuff like Docker and Kubernetes.

The book comes in at a svelte 350ish pages, and you know that each chapter could have almost been a book on its own (or at least a very long whitepaper). That said, Preston does a great job of sticking to the topic at hand, and breaking down potentially complex scenarios in a concise and simple to digest fashion. As I like to say to anyone who’ll listen, this stuff can be hard to get right, and you want to get it right, so it helps if the book you’re using gets it right too.

Should you read this book? Yes. Particularly if you have data or know someone who has data. You may be a seasoned industry veteran or new to the game. It doesn’t matter. You might be a consultant, an architect, or an end user. You might even work at a data protection vendor. There’s something in this for everyone. I was one of the technical editors on this book, fancy myself as knowing a bit about data protection, and I learnt a lot of stuff. Even if you’re not directly in charge of data protection for your own data or your organisation’s data, this is an extremely useful guide that covers off the things you should be looking at with your existing solution or with a new solution. You can buy it directly from O’Reilly, or from big book sellers. It comes in electronic and physical versions and is well worth checking out. If you don’t believe me, ask Mellor, or Leib – they’ll tell you the same thing.

  • Publisher: O’Reilly
  • ISBN: 9781492094050

Finally, thanks to Preston for getting me involved in this project, for putting up with my English (AU) spelling, and for signing my copy of Unix Backup and Recovery.

Random Short Take #57

Welcome to Random Short Take #57. Only one player has worn 57 in the NBA. So it looks like this particular bit is done. Let’s get random.

  • In the early part of my career I spent a lot of time tuning up old UNIX workstations. I remember that lifting those SGI CRTs from desk to desk was never a whole lot of fun. This article about a Sun Ultra 1 project brought back a hint of nostalgia for those days (but not enough to really get into it again). Hat tip to Scott Lowe for the link.
  • As you get older, you realise that people talk a whole lot of rubbish most of the time. This article calling out audiophiles for the practice was great.
  • This article on the Backblaze blog about one company’s approach to building its streaming media capability on B2 made for interesting reading.
  • DH2i recently announced the general availability of DxEnterprise (DxE) for Containers, enabling cloud-native Microsoft SQL Server container Availability Groups outside and inside Kubernetes.
  • Speaking of press releases, Zerto has made a few promotions recently. You can keep up with that news here.
  • I’m terrible when it comes to information security, but if you’re looking to get started in the field, this article provides some excellent guidance on what you should be focussing on.
  • We all generally acknowledge that NTP is important, and most of us likely assume that it’s working. But have you been checking? This article from Tony does a good job of outlining some of the reasons you should be paying more attention to NTP (there’s also a quick way to check for yourself sketched after this list).
  • This is likely the most succinct article from John you’ll ever read, and it’s right on the money too.
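As promised, here's that NTP check. It's a minimal sketch using the third-party ntplib package (pip install ntplib); point it at your internal NTP server if that's what your hosts actually use:

```python
# Quick NTP sanity check: report the offset between the local clock and an
# NTP server. Sustained drift beyond a second or so deserves attention.
import ntplib

response = ntplib.NTPClient().request("pool.ntp.org", version=3)
print(f"offset: {response.offset:+.3f}s  stratum: {response.stratum}")
```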

Rubrik Basics – Multi-tenancy – Create An Organization

I covered multi-tenancy with Rubrik some time ago, but things have certainly advanced since then. One of the useful features of Rubrik CDM (and something that’s really required for Envoy to make sense) is the Organizations feature. This is the way in which you can use a combination of LDAP sources, roles, and tenant workloads to deliver a packaged multi-tenancy feature to organisations either within or external to your company. In this article I’ll run through the basics of setting up an Organization. If you’d like to see how it can be applied in a practical sense, it’s worth checking out my post on deploying Rubrik Envoy.

It starts, as these things often do, by clicking on the gear in the Rubrik CDM UI. Select Organizations (located under Access Management).

Click on Create Organization.

You’ll want to give it a name, and think about whether you want to give your tenant the ability to do per-tenant access control.

You’ll want an Org Admin Role to have particular abilities, and you might like to get fancy and add in some additional roles that will have some other capabilities.

At this point you’ll get to select which users you want in your Organization.

Hopefully you’ve added the tenant’s LDAP source to your environment already.

And it’s worth thinking about what users and/or groups you’ll be using from that LDAP source to populate your Organization’s user list.

You’ll also need to consider which role will be assigned to these users (rather than relying on Global Admins to do things for tenants).

You can then assign particular resources, including VMs, vApps, and so forth.

You can also select what SLA Domains the Organization has access to, as well as Archival locations, and replication targets and sources. This becomes important in a multi-tenanted environment as you don’t want folks putting data where they shouldn’t.

At this point you can download the Rubrik Envoy OVA, deploy it, and connect it to your Organization.

And then you’re done. Well, normally you would be, but I didn’t select a whole lot of objects in this example. Click Finish and you’re on your way.
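If you'd rather script this than click through the UI, organisation management should also be achievable via the Rubrik CDM REST API. Treat the sketch below as illustrative only: the endpoint path and payload are assumptions on my part, so check the API documentation for your CDM version before using anything like it:

```python
# Hypothetical example of creating an Organization via the CDM REST API.
# The endpoint and payload are assumed - verify against your CDM version.
import requests

CLUSTER = "https://rubrik.example.com"  # hypothetical cluster address
TOKEN = "REPLACE_WITH_API_TOKEN"        # token for a suitably privileged account

resp = requests.post(
    f"{CLUSTER}/api/internal/organization",  # assumed endpoint path
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"name": "Tenant-A", "isGlobal": False},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```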

Assuming you’ve assigned your roles correctly, when your tenant logs in, he or she will only be able to see and control resources that belong to that particular Organization.