Wednesday, December 19, 2018

FlexCache is BAAAAAAACK!!!!

We're starting to hear a lot of "Global Namespace" in the industry.  Buzz, buzz, buzz.  So what really is a "Global Namespace"?  It can be broken into two categories.

  1. Your whole data structure is under one "root" volume and the data is actually stored all over the place.  But it's all under one directory tree with soft links, symlinks, junction points, widelinks, and a whole bevy of tricks.

    OR
  2. Your dataset is available in multiple places in a read-write configuration (caching), and no matter where you are on the company's network (via VPN, overseas, in a remote office, at the datacenter) you get a similar response time to that dataset and it's all read-writable.

Which one do YOU want?

The first one SOUNDS great, but what benefit does it have?   Well, now you don't have to go search the P: drive AND the M: drive AND the T: drive for what you want; you just have one MASSIVE-looking drive.  But you're still searching through the subdirs to find your data.  It's nice... but it's just smoke and mirrors, and you're gaining VERY little.

On the Linux side, it's a LITTLE better since your automounter just mounts stuff automagically under some path (maybe /data) and everything underneath looks like a single file system.  BUT you have the same Windows dilemma.  Is it REALLY better for your clients?  Also think of applications that now have to go down some unGAWDly path to get at their data.
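To make that concrete, here's a minimal sketch of what that single-namespace trick usually looks like with the Linux automounter (autofs).  The map file names, mount options, and filer/export names here are all made up for illustration:

    # /etc/auto.master -- hang everything under /data
    /data   /etc/auto.data

    # /etc/auto.data -- each key shows up as /data/<key>, but every one of them
    # can live on a totally different filer in a totally different site
    pdrive   -rw,hard,nfsvers=3   filer-nyc:/vol/pdrive
    mdrive   -rw,hard,nfsvers=3   filer-ldn:/vol/mdrive
    tdrive   -rw,hard,nfsvers=3   filer-sjc:/vol/tdrive

The clients see one tidy /data tree, but each branch can still be a long WAN hop away, which is exactly the problem in the next paragraph.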

THEN you have the whole performance issue.  Your Windows P: drive, which is the same as the Linux /data/pdrive path, is hella ethernet miles away and performance SUUUX.

Sooooooooo

Let's look at the second one.  Now this sounds good, but is it?  This means that your data is in multiple places at once, and when you look at it from anywhere in the world, the performance is "local" and it's a writable copy.   Awesome!  How does all this coordinate?  Well, ONTAP has a cool new thing called FlexCache.  Is it really new?   Well, yes it is.  But it isn't.
We had something in 7-mode called FlexCache.  It was good for its purpose (bringing the data, in a read-heavy but still writable capacity, closer to whatever needs it), but it lacked enhancements.  It was great for what it did, but the potential to move it forward and give it more features was limited by the backend technology.  Enter FlexCache in ONTAP starting with 9.5.  It's a whole new rewrite of the technology that can give so much more!

First of all, what are the use cases?  There are a few.  The first one that comes to mind is working on the same data in multiple places.  This comes up in AI, EDA, media rendering, code distribution, and other similar workloads.  You have a single "master" data set, but you want other sites to be able to read the same data and write some results back into that same dataset.  The next one that comes to mind is creating multiple copies of a read-heavy dataset to keep the "hot volume" syndrome down.  We all know there are challenges when multiple clients are reading the exact same file(s).  So let's spread the load!

So...  how do I get it and use it?
Starting in ONTAP 9.5, the new FlexCache is available.  Give it a spin, and let your mind explore what you can use this new feature for.
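If you want to see roughly what the setup looks like, here's a minimal sketch using the ONTAP CLI.  The cluster, SVM, aggregate, and volume names are made up, the SVM peering step only applies when the cache lives in a different cluster than the origin, and exact parameters can vary by release, so treat this as a rough outline and use the TR below for the real procedure:

    # Peer the SVMs for FlexCache (assumes the cluster peer already exists;
    # skip this entirely if the cache is in the same cluster as the origin)
    vserver peer create -vserver cache_svm -peer-vserver origin_svm -peer-cluster origin_cluster -applications flexcache

    # Create the cache volume (it's built as a FlexGroup) pointing at the origin volume
    volume flexcache create -vserver cache_svm -volume pdrive_cache -aggr-list aggr1 -size 100GB -origin-vserver origin_svm -origin-volume pdrive -junction-path /pdrive_cache

    # Check the cache from the cache cluster...
    volume flexcache show -vserver cache_svm

    # ...and see which caches exist from the origin cluster
    volume flexcache origin show-caches -origin-volume pdrive

Mount the cache volume's junction path from your clients at the remote site, and they read and write through the cache while ONTAP keeps it coherent with the origin.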

I do have a Technical Report for the deep dive.  You can find it here:  https://www.netapp.com/us/media/tr-4743.pdf

Go for it!


3 comments:

Ryan Wood said...

Any idea if FlexCache will be expanded in the future to support SMB protocol?

Just some guy (Chris Hurley) said...

Yes it will. Stay tuned.

Unknown said...

Chris,
I work for a NetApp partner. I was listening to the podcast on flex cache and I have quite a few customers interested in using SMB with Flex Cache. I am struggling to find documentation on how to set it up. Is there a step by step guide on how to set it up? I know you need to create a Flex Group but then there really isn’t any documentation on how to create the Flex Cache relationship.
Can you help point me in the right direction? The TR document doesn't walk through it either.
Thanks, Dave