February 13, 2018

Fantastic program for the IT Press Tour #26

The IT Press Tour, the leading event for the IT press, has just unveiled the list of participating companies for the 26th tour, scheduled for the week of February 26.

We recognize a few companies we have met in the past, along with several newcomers driving emerging IT segments.

Topics will cover big data, analytics, databases, data preparation, cataloging and insights, application monitoring and performance management, in-memory computing, continuous delivery, and GDPR, soon to take effect in Europe.

Here is the list:
  1. Aerospike, leader in high-performance NoSQL databases,
  2. Alation, a reference in data-driven analytics for enterprises,
  3. Anaplan, pioneer in corporate planning management,
  4. Datadog, key player in modern application monitoring and performance management,
  5. Frame, a leading secure cloud workspace platform,
  6. GridGain Systems, in-memory computing platform leader built on Apache® Ignite™,
  7. Harness.io, a new player in Continuous Delivery-as-a-Service,
  8. MapD Technologies, top vendor in GPU-based databases,
  9. and Waterline Data, fast-growing player in data discovery and cataloging.
This edition will again be packed, with a dense program and top innovators. I invite you to follow us via @ITPressTour, #ITPT, and the Twitter handles of the various publications and reporters.



February 1, 2018

Nyriad, a new approach to data protection

Nyriad plays with its name, presumably a blend of Myriad and RAID.

GPU-based products are hot: we have met several database vendors leveraging such processors, including BlazingDB, BlazeGraph, SQream Technologies, Kinetica and MapD Technologies, and now it is a storage product's turn.

The company targets big data and high-performance computing. It has its roots in the Square Kilometre Array project in New Zealand, having delivered 160TB/s of processing and 50PB/day. It works well with parallel file storage approaches such as Lustre.

Partners include Nvidia, SuperMicro, Microsemi, Tyan, HPC Systems, KingSpec, Revera, Datacom, WDC and Netlist, and the company is actively looking for OEMs.

The company develops NSULATE, a GPU-accelerated block device. The storage product is a DAS configuration with hundreds of devices. It is compatible with Linux file systems, and a ZFS + NSULATE combination with SHA-256 delivers 2-4x the performance of plain ZFS for Lustre. We don't have many details on the solution yet and expect more information soon.
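
To make the integrity idea concrete, here is a minimal sketch of the block-level SHA-256 checksumming NSULATE is said to perform (on the GPU, in its case); this plain-CPU Python version only illustrates the check itself, not Nyriad's implementation.

    import hashlib

    def checksum(block: bytes) -> bytes:
        return hashlib.sha256(block).digest()

    block = b"\x00" * 4096
    stored = checksum(block)                          # computed and stored at write time
    assert checksum(block) == stored                  # recomputed and compared at read time
    assert checksum(b"\x01" + block[1:]) != stored    # silent corruption is caught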

The philosophy is to replace the RAID controller with a general-purpose GPU board. We notice a discrepancy between the web site and the printed documentation: the site mentions up to 128 parities, the paper up to 255. Either way, volumes and disk groups could potentially be very large and very flexible. In fact, this is far more than needed compared with traditional data protection mechanisms.
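
As a rough comparison, assuming a classic k data + m parity erasure code, protection and overhead look like this; the 128-parity configuration below is an assumed shape, not a documented Nyriad layout, and is only there to show how far beyond RAID-6-style schemes these numbers go.

    def overhead(k: int, m: int) -> float:
        """Fraction of raw capacity consumed by parity in a k+m scheme."""
        return m / (k + m)

    # (k, m): RAID-6-like, a typical erasure coding layout, and an extreme
    # NSULATE-scale setup (assumed, for illustration only).
    for k, m in [(8, 2), (10, 4), (128, 128)]:
        print(f"{k}+{m}: survives {m} simultaneous device failures, "
              f"{overhead(k, m):.0%} capacity overhead")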

The GPU on the node could also be used for compute in addition to storage functions, delivering a new example of compute + storage convergence.

NCRYPT is the companion of NSULATE and offers GPU-accelerated cryptographic algorithms. It includes real-time blockchain APIs that provide encryption, hashing and cryptographic signature generation.

It certainly deserves a deep look, as it brings something new to the table against the Mojette Transform from Rozo Systems, MemoScale or the Intel ISA-L functions.


January 29, 2018

DriveScale to democratize composable infrastructure

A hot topic has recently emerged on the market: Software Composable Infrastructure, aka SCI. The IT Press Tour had the privilege to meet and visit DriveScale, one of the few drivers of this industry trend.

The idea of this approach is justified by businesses' need for agility and scalability, inspired by public clouds. Its main characteristic is well summarized by the term disaggregation: better segregation and granularity of compute and storage resources, controlled by an intelligent software layer.

The goal of this data center approach is to increase resource utilization, thus reducing cost, improving ROI and optimizing TCO, with real-time dynamic provisioning to react to new and changing workloads. In a nutshell, adopting an SCI philosophy offers cloud-like IT operations and represents a new step towards the fully automated data center.

You get the idea: SCI is the right term to describe an IT infrastructure controlled, designed and built dynamically by software.


SCI provides a transparent service to upper application layers: no application changes are required. From simple setups to complex horizontal scalability, SCI supports modern applications running on Hadoop, Spark, NoSQL databases such as Cassandra, and containers as well. It is also interesting to see a commercial approach justified to control open source applications.

This architecture is a new step towards disaggregation, with a granularity tough to deliver with classic or even recent tools.

Founded in 2013 by two Sun ex-employees, Tom Lyon and Satya Nishtala, DriveScale illustrates the workstation company's long-time tag line, "The network is the computer". Sun in a sense anticipated this but never delivered at this level; what we live today was a utopia 10 or 15 years ago. The founders haven't forgotten their past colleagues, so it's no surprise to see Scott McNealy, James Gosling and Whitfield Diffie as advisors. With 25 employees today, DriveScale is frugal with VC money, having raised "only" $18 million so far, but clearly we can expect a new round in 2018.

The solution is organized around 4 components:
  • Adapter aka DSA, currently a SAS-to-Ethernet 1U appliance that bridges compute and storage resources. We expect an NVMe flavor in 2018.
  • Management Server aka DMS, which controls the rack SCI service and is available as a bare-metal or VM Linux instance.
  • Agents, lightweight Linux processes running in user mode on the servers controlled by the DMS.
  • and Central, a cloud-based administration service to control and manage deployments globally.
The DMS receives information from the Agents for servers and from the DSAs for storage.


A RESTful API is provided to integrate the DMS service with popular, already-deployed data center tools.
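
DriveScale does not publish the endpoint names here, so the URL paths, fields and credentials below are purely hypothetical; the sketch only shows what composing a logical node through such a RESTful DMS API could look like.

    import requests

    DMS = "https://dms.example.local/api/v1"   # assumed base URL of a DMS instance
    AUTH = ("admin", "secret")                 # placeholder credentials

    # Discover available drives (hypothetical endpoint).
    drives = requests.get(f"{DMS}/drives", auth=AUTH).json()

    # Compose a logical node: one diskless server plus ten drives (hypothetical shape).
    payload = {"server": "server-17", "drives": [d["id"] for d in drives[:10]]}
    requests.post(f"{DMS}/logical-nodes", json=payload, auth=AUTH).raise_for_status()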


Another way to see this IT infrastructure approach is "disaggregate (hardware resources) and compose" a new logical computer, different from classic virtualization or container modes. In fact, these two services can reside above SCI to offer application independence and mobility.


A DriveScale cluster groups diskless, stateless servers and JBODs thanks to the DSA. This key element has two 12Gb four-lane SAS interfaces and two 10Gb Ethernet interfaces, with redundant power supplies as well. Up to 80 disk drives are supported, which could deliver a throughput of 80GB per second. For servers, DriveScale supports 2U, 1U, 1/2U and 1/4U models hosting an Agent.

In terms of licensing, one adapter per JBOD costs $10,000, node licenses are $2,000 per logical node per year and drives are $25 per drive per year, reaching approximately $106,000 the first year for 30 nodes and 300 disks, and about $76,000 per year thereafter.
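
A quick back-of-the-envelope check, assuming one adapter per fully populated 80-drive JBOD (so 4 adapters for 300 drives, an assumption on our part):

    import math

    adapters = math.ceil(300 / 80) * 10_000   # one-time: $10,000 per adapter/JBOD
    nodes    = 30 * 2_000                     # $2,000 per logical node per year
    drives   = 300 * 25                       # $25 per drive per year

    print("year 1 :", adapters + nodes + drives)  # 107,500 -- close to the quoted ~$106,000
    print("renewal:", nodes + drives)             # 67,500 under these assumptions (the post quotes ~$76,000)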

Key partners for DriveScale are Hortonworks, Cloudera, MapR, DellEMC, HPE, Arista, Cisco, SuperMicro, Quanta, Promise, Sanmina and Foxconn, with several resellers such as WWY, DellEMC or Promark.


This effort is a new example of vendors trying to adopt a cloud model and limit on-premises erosion. The industry has already addressed this movement with hybrid cloud, having realized that dollars are going elsewhere, especially into the pockets of the big 3 cloud service providers. Again, this is the direction of the industry. DriveScale will benefit from infrastructure vendors who try to delay this inevitable move. The company is leading this SCI wave with a purely independent solution. Clearly 2018 will be an interesting year to watch...

January 23, 2018

Hedvig leads the SDS pack

Always a pleasure to visit Hedvig and meet Avinash Lakshman and his team; we had a new opportunity during the last IT Press Tour in December. An obvious leader in the SDS space, the team presented a new iteration of the product, fully hardware and cloud agnostic.

The product, Hedvig Distributed Storage Platform aka DSP, today at 3.0, was illustrated with a multi-cloud and cloud orchestration approach, validating again that software makes the difference. Thanks to the HPE investment, and the support and active participation of Milan Shetti, CTO of data center infrastructure at HPE, as technical advisor, the company has made significant progress and market penetration. HPE has finally selected the right player after several attempts that never took off.


Hedvig means independence and intelligence: the solution is not tied to or limited by any single interface - file, block and object are all supported - and it supports any workload and any application, is open to virtualization and containers, runs on any hardware, and works today with the top 3 cloud service providers, AWS, GCP and Azure. Clearly the message is about universal usage, providing a converged platform between primary and secondary storage across multiple applications.

Again, like other players, the company provides a multi-cloud model, surfing on market hype. This has two effects: a "me too" positioning means the company can be selected for that support, but at the same time it is no differentiator against direct competition. Still, you have to be multi-cloud, hybrid and cloud agnostic nowadays.

Hedvig defines itself as a hyperscale SDS; I would add, a multi-protocol SDS as well.

In France, BNP Paribas has chosen the product and designed a geo-distributed data center spanning Paris and London.


To refresh readers on the product: the philosophy has scalability, agility and resiliency in mind. Three components define the platform:

  • Storage Service, the software installed on Linux x86 systems glued together; these nodes form the scale-out back-end.
  • Storage Proxy, the front-end exposing virtual disks to the application tier via block or file protocols.
  • And APIs to integrate the service close to the application; this is where the object interface lives.
The virtual disk is a core concept here, delivering what many other SDS solutions don't and can't provide, especially encryption, deduplication and compression.


DSP 3.0, introduced last summer, provides three new features:
  1. FlashFabric, with the ability to scale flash across private and public clouds and deliver optimized caching, supporting PCIe, NVMe and 3D XPoint in a single platform,
  2. Encrypt360 for end-to-end encryption - deduplication is done first - based on 256-bit AES per disk and accelerated via Intel processors (see the sketch after this list),
  3. and new and improved CloudScale plugins for partner software like Veritas OST, VMware or Red Hat for container imaging.
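
To illustrate why Encrypt360-style designs deduplicate before encrypting, here is a conceptual sketch, not Hedvig's code: fingerprints are taken on plaintext so duplicates remain detectable, since encrypting first would make identical blocks produce different ciphertexts. AES-256 in CTR mode and the 4KB block size are our assumptions.

    import hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)   # 256-bit AES key
    store = {}             # fingerprint -> ciphertext: dedup keyed on plaintext hashes

    def write_block(block: bytes) -> str:
        fp = hashlib.sha256(block).hexdigest()   # fingerprint computed before encryption
        if fp not in store:                      # only previously unseen blocks are stored
            iv = os.urandom(16)
            enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
            store[fp] = iv + enc.update(block) + enc.finalize()
        return fp

    write_block(b"A" * 4096)
    write_block(b"A" * 4096)   # duplicate is caught before encryption
    print(len(store))          # 1 -- encrypt-first would have stored two distinct copies
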
We anticipate some great news from Hedvig in 2018, as the company is on the hot list of many players and, of course, customers.

January 18, 2018

iXsystems, the reference in open source storage

The iXsystems visit during the last IT Press Tour in December was a good surprise, as we discovered a hidden side of the company in addition to its product line.

First, the company is not only a sales machine but also a designer of systems, with its own manufacturing facility that assembles parts to build server and storage systems.

Founded in 2002 and profitable since inception, iXsystems has a unique story driven by an open source DNA. It all started when Mike Lauth and Matt Olander acquired the BSDi hardware business, 6 years after BSDi absorbed Telenet Systems Solutions Inc. The story was launched with that acquisition and has never stopped since.

The philosophy chosen and proven since the origin was to prefer a private approach without any VC money, in order to stay away from financial pressure. We have to recognize that iXsystems has demonstrated that it works. The two other key decisions were, and still are, customer interaction for product definition and reliance on open source software.


The result is impressive: strong figures, a respected and recognized brand, and an important installed base with more than 1EB shipped. The customer list speaks for itself, with famous names such as Box, Evernote, VMware, LinkedIn, Sony, NBC, FOX, top universities, and government research entities like NASA, LANL or LLNL.

The first activity is to produce servers powered by a series of top operating systems - CentOS, FreeBSD, Ubuntu, Red Hat Linux, TrueOS - and storage servers running FreeNAS or TrueNAS with OpenZFS.

The firm is a system builder, not a VAR, producing and shipping 5,000 to 10,000 servers globally per year. iXsystems of course provides full support for its configurations. The team has selected multiple hardware providers, uses Intel/AMD/ARM processors, and delivers 70% custom configurations. This is a key aspect of their success. iXsystems is a long-time true believer in open source and represents a hidden force, not very visible in the market except to specialists. The company's philosophy is to be agnostic regarding hardware, OS and applications.

The second strong business is the storage appliance line powered by FreeNAS or TrueNAS, with OpenZFS added more recently. iXsystems is a key player in SDS, perfectly illustrating the value delivered by this concept.

FreeNAS is a ubiquitous name, available everywhere on the planet, with great success and adoption.


As mentioned, OpenZFS was added to the picture to solidify the storage engine and bring real new features, especially triple-parity RAID, unified file, block and object interfaces, thin provisioning, integrated volume management, inline compression and deduplication, and of course high capacity support, as the file system uses a 128-bit model.


The SOHO/SMB positioning of FreeNAS helps iXsystems penetrate accounts and deliver certified rackmount systems, and the company later introduced TrueNAS, the enterprise storage flavor. What are the differences between FreeNAS and TrueNAS?
  • The first difference is how these solutions are delivered: FreeNAS is free downloadable software while TrueNAS is integrated with the iXsystems storage appliance line.
  • The second resides in support: FreeNAS is supported by the community while TrueNAS is commercially supported.
  • The third is based on the delivery model: dedicated hardware for TrueNAS, hence performance and usability optimization, versus completely open hardware choices for FreeNAS.
  • And the availability offered by TrueNAS, leveraging some key features of OpenZFS. Interfaces are also a key element, with FC available at 2, 4 and 8Gb/s, active/active storage controllers, TrueCache, hypervisor certifications, unlimited snapshots, and data at-rest and in-flight encryption.
Another way to see the differences resides in the usage of these products: FreeNAS is for non-critical information, while TrueNAS is for when things start to be critical.

The TrueNAS Z35 can scale to 4.8PB of raw storage capacity in only 35U of rack space, and the X20 to 720TB in only 6U. The density and cost per TB of TrueNAS storage make it a compelling option.

The company also has some ideas about delivering an object storage solution, making the portfolio even more comprehensive. For that, the team has selected one of the best open source object storage products on the planet - I let you imagine which one it is. We expect great new things in 2018.

January 8, 2018

New scale-out NAS generation with Qumulo

Qumulo, the famous scale-out NAS vendor, offered the IT Press Tour crew a superb interactive session last month, where we discovered what really makes them pretty unique on the market.

Founded in March 2012 in Seattle but really launched in March 2015, Qumulo has raised $130M from top investors and has around 200 employees as of today. The firm was created by several former Isilon leaders and probably has one of the best file system teams in the storage industry, having participated in the scale-out revolution of the early 2000s with Isilon, still the reference in scale-out NAS.

Qumulo, along with a few other strong technology players in file storage, helps push object storage into the corner where that approach should stay, meaning capacity and long-term data preservation. Some players have tried to offer a file system mode, in fact file storage, on top of object storage, but it only works on paper, failing with data integrity problems and high latency issues. Just think about two core file system functions, rename() and link(), and you get an idea of the challenges to solve, not to mention the need for a strict consistency model. These points confirm that building a file system on top of an object store is still a dream or a utopia, whereas the reverse - offering an object storage API on top of a file system - is easier. It explains why object storage had real market difficulties in 2017.
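
To see why rename() is such a challenge, consider this small sketch: in a file system a directory rename is one atomic metadata operation, while on a flat object namespace it degenerates into a per-key copy and delete, with no atomicity across the set. The dict stands in for a bucket; a real object store behaves the same way per key.

    # A directory rename on a POSIX file system is a single atomic call:
    #   os.rename("/data/projA", "/data/projB")
    # On a flat key/value namespace the same operation touches every key:

    bucket = {f"projA/file{i}": b"..." for i in range(1000)}

    def rename_prefix(store: dict, old: str, new: str) -> None:
        for key in [k for k in store if k.startswith(old)]:
            store[new + key[len(old):]] = store.pop(key)   # copy + delete, key by key
            # every iteration is a separate operation: a crash mid-loop leaves
            # the "directory" half renamed, which a file system never allows

    rename_prefix(bucket, "projA/", "projB/")
    print(len(bucket))   # 1000 keys, all individually rewritten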

Known for its innovative approach to distributed file systems, Qumulo has recently repositioned its message around Qumulo File Fabric, aka QF2. Funny to see that many companies use the term fabric to replace and extend what we named a few years ago FAN, i.e. File Area Network.


And Qumulo has forgotten a key fundamental player in file systems; I'm pretty surprised they didn't list Veritas. People who know, build and play with file systems know the role of Veritas in that space with VxFS, both as a technology and as a market presence. And if you consider the companion volume manager, VxVM, and some file system accelerator options, you get the whole picture; in other words, Veritas invented everything in that space. Just realize that snapshots existed there in 1992, as did dynamic file system resizing in both directions - shrink or grow - among many other capabilities. Some interesting flavors like SGI XFS could also be listed, introduced with IRIX 5.3 in 1993. I should probably build the same map I created for object storage and CAS; I'm sure you remember that famous article.

Back to Qumulo: the philosophy is to provide a highly scalable file system cluster deployed in various flavors: on-prem with Qumulo appliances, on-prem in a software model on commodity servers such as HPE Apollo dense servers, and finally in the cloud on AWS, running on EC2. It perfectly illustrates the SDS approach and its advantages, giving users the flexibility to deploy on their preferred model and evolve with it.


Qumulo has chosen to build independent clusters glued together with a data propagation method. Imagine a global environment with a local cluster of Qumulo appliances, a second cluster deployed on Apollo servers and a third running on AWS. The company has developed an Asynchronous Automatic Continuous Replication (AACR) method to distribute data across clusters. AACR is a file-based model that today copies the entire file, without deduplication, and is not yet block-based. With such data copy techniques, Qumulo is able to run jobs in various places on demand, a pretty clever approach, especially for some vertical use cases.
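
Here is a minimal sketch of what a file-based, asynchronous, continuous replication loop looks like in the spirit of AACR: whole files copied, no deduplication. The mount points and poll interval are invented, and Qumulo's actual implementation is certainly far more sophisticated.

    import shutil, time
    from pathlib import Path

    SRC = Path("/mnt/cluster_a")   # assumed mount points for two clusters
    DST = Path("/mnt/cluster_b")

    replicated = {}   # path -> mtime at last copy
    while True:       # runs forever, like a replication daemon
        for f in SRC.rglob("*"):
            if f.is_file() and replicated.get(f) != f.stat().st_mtime:
                target = DST / f.relative_to(SRC)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)   # the entire file is copied, even for a 1-byte change
                replicated[f] = f.stat().st_mtime
        time.sleep(5)   # asynchronous: the remote cluster lags by at most the poll interval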

This design invites the next remark, about data consistency: Qumulo QF2 is strongly consistent within a cluster and eventually consistent across data centers or the WAN. Qumulo also relies on Paxos as its consensus protocol.

In terms of data protection, having started with replication, the company has offered erasure coding for a few quarters now, relying on the Intel ISA-L library, a pretty good choice.

With potentially billions of files stored on the platform, modern file systems need an additional element to keep metadata operations viable. Qumulo built QumuloDB, distributed across all cluster nodes; a similar side metadata database model is also used by RozoFS, though as one central database per file system. Imagine a recurring backup task that needs to select and protect only the files modified since a certain date, i.e. an incremental backup. With small volumes, walking the tree is acceptable, but with huge volumes and tons of files this step is suicide: the task takes so long it may never finish, and the protection has to run pretty often. Worst case, similar tasks pile up on the system because each one takes longer than the interval between backups. Now imagine that all create/update operations on files are recorded in a fast side database you can query to find the list of files to back up: it is almost magic, and super fast - you get the file paths back and submit them to the backup job. The same remark applies to archiving, tiering or migration, which you also need to integrate into a viable solution. This is just one illustration of the kind of service QumuloDB offers - historical and current metadata tracking and storage - with the ability to freeze a version of the database to provide a snapshot mechanism. In other words, a version of the DB is a version of the file system.
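
QumuloDB's real schema and API are not public here, so the SQLite layout below is an assumption; it only illustrates the core idea of answering "what changed since the last backup?" with one indexed query instead of a full tree walk.

    import sqlite3

    db = sqlite3.connect("metadata.db")
    db.execute("CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, mtime REAL)")
    db.execute("CREATE INDEX IF NOT EXISTS idx_mtime ON files (mtime)")

    last_backup = 1_512_086_400.0   # epoch seconds of the previous incremental run

    # One indexed query, independent of total file count -- no walk over billions of entries.
    for (path,) in db.execute("SELECT path FROM files WHERE mtime > ?", (last_backup,)):
        print("to back up:", path)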

And Qumulo recently appeared for the first time in the bizarre Gartner Magic Quadrant for distributed file systems and object storage; I wrote a long analysis of it recently in StorageNewsletter, which you can read here and here. Funny that analysts associate object storage and distributed file systems - why not with secondary platforms…

Clearly Qumulo is one of the few gems in the file storage business, along with Avere Systems (now a Microsoft company), Elastifile, Panasas, Quantum with its StorNext-based offering and now the latest Xcellis scale-out NAS iteration, Rozo Systems and WekaIO. They all demonstrate the superiority of their native file-based approach, sometimes with a parallel mode, over things like object storage, which is good for capacity and long-term retention but not for highly demanding file environments. Some dream about it, but the market invites them to consider business reality.

We expect Qumulo to introduce a geo-dispersed approach, even with restrictions, and we hope for a tiering feature across clusters and clouds… plus a new iteration of the AACR capability explained above. The company is preparing to land in Europe in Q1 2018 and I anticipate pretty rapid growth there. Honestly, the product is strong, so there is no doubt the Seattle company will recruit top guns to rapidly gain market share and penetrate the old continent. We hope to meet them again next year during a future tour to measure progress and confirm development directions.

December 21, 2017

Panasas is back

Panasas, a historic leader in high performance file storage, has started a new era following several years of redesigning and re-architecting its solution.

In fact, the original motivation behind this period was to go beyond traditional HPC and apply scalable file storage to other market segments. In other words, there are market categories with similar needs where the Panasas solution would be a very good fit.

Recently, with the IT Press Tour crew, we had the privilege to spend a few hours at Panasas HQ in Sunnyvale. It was a very interesting, very interactive session, and the executive team was very transparent with our group.

Back to the roots of the company: Panasas was founded in 1999 in Pittsburgh, PA by Garth Gibson, the famous researcher associated with the RAID patents. Garth Gibson and his colleagues had an approach summarized later in the SCSI T10 standard as Object-based Storage Devices, or OSDs. For readers discovering Panasas, the name means Pittsburgh Advanced Network Attached Storage Application Software. So far the company has raised $171 million - the last round was in 2013 - and has delivered its product to more than 500 customers in 50 countries. Doing the simple math of 500 customers over 18 years gives on average 28 customers per year, or more than 2 per month over 216 months. Many players in such markets would dream of this number. The mission was and still is to deliver a high performance scale-out NAS solution. The company went through several executive changes over the years, but Faye Pairman (left on the photo below) has now been CEO for about 7 years. A few members of the current team have Adaptec in common, a key storage player many years ago.


Initiated and supported by famous US research labs, the company has developed a pretty unique solution to address and solve file storage performance challenges in very demanding IT environments. As already mentioned, this story doesn't end with HPC: it is also a very good fit for several use cases in M&E, manufacturing, life sciences, education/university and government, and of course energy. We still don't understand why Gartner decided to remove Panasas from its "bizarre" Magic Quadrant for Distributed File Systems and Object Storage. Read my comments in the article I published on StorageNewsletter almost 2 months ago.

We also have some remarks about the following picture, as Panasas has omitted Primary Data, Rozo Systems, Quantum Xcellis scale-out NAS and WekaIO for asymmetric distributed parallel file systems very similar to Panasas PanFS, as well as Avere Systems, Elastifile and Qumulo for the "classic" NAS play. To be clear, Panasas sells ActiveStor appliances powered by PanFS.


Back to the product: it's fundamental to understand what makes a parallel file system different, especially a design philosophy such as PanFS. A consumer, i.e. a client, of the file system is able to send a file to multiple storage targets at the same time, splitting the content across these units. Thus the time to write and read is dramatically reduced. This is very different from sending a file via SMB or NFS, where the entire file goes through a single NAS head. If you wish to do this with NFS, you have to consider pNFS with NFS v4.2; if not, you need a special piece of software embedded in the client machine to handle the interaction between the metadata server(s) and the data servers and to process I/O operations. A parallel file system can be asymmetric or symmetric; this simply relates to how the metadata server role is operated, and again, PanFS uses an asymmetric model. By the way, Panasas was a key contributor to pNFS, the standardized proposal to extend NFS with this asymmetric mode. I invite you to refer to pNFS.org for more details.

To detail the definition of an asymmetric distributed parallel file system, we need to mention that:
  • asymmetric refers to the use of side machines acting as metadata servers (this role can also be added to data servers),
  • distributed means that the file system spans and relies on multiple machines, and
  • parallel, as explained above, refers to addressing storage targets concurrently.
With current market terminology, we say control plane for the metadata servers and data plane for the data servers.

In two words, one of the benefits resides in the elapsed time of I/O operations. If you need T seconds to write a file, you will need only approximately T/10 seconds if you stripe the same file across 10 back-end servers. And this clearly makes sense when applications consume large files, as most of the time is dominated by data I/Os rather than metadata I/Os; we very often see a 5-10% share for metadata operations versus 90-95% for data operations.
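
A small worked example of that T/N effect, with an assumed per-target throughput:

    file_gb = 100          # size of the file to write
    target_mb_s = 500      # assumed sustained throughput of one storage target

    for n in (1, 4, 10):
        seconds = file_gb * 1024 / (target_mb_s * n)
        print(f"{n:2d} targets: {seconds:6.1f} s")
    # 1 target: ~204.8 s; 10 targets: ~20.5 s -- the data phase shrinks by ~N,
    # while the small metadata share (5-10%) is untouched by striping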

Panasas PanFS supports both modes: parallel with the DirectFlow agent, fully POSIX compliant, and NAS with the NFS and SMB protocols.


With such performance in critical environments, this kind of platform must provide advanced data protection mechanisms. Panasas offers file-based erasure coding in an N+2 fashion, thus tolerating 2 simultaneous drive failures. RAID 6 and other disk-oriented approaches fail to protect data within a limited rebuild time, especially with large drives and large capacities. For small files and small data volumes, file replication across nodes is still a pretty good method.

I/O performance and protection improve with scale, as stripes can be larger, reducing elapsed operation time.

For PanFS, the team has made a great effort to facilitate management of the platform with an intuitive GUI and console and, of course, a powerful CLI.

Panasas has made recently a few announcements:
  • An even more disaggregated architecture with a 2U director blade - you know, the famous metadata servers - with 4 nodes in the chassis, named ActiveStor Director 100 or ASD-100, pretty well aligned with metadata-intensive operations. This ASD-100 node has 8GB of NVDIMM for the transaction logs on top of 96GB of DDR4 RAM and a 2x40/4x10GbE Chelsio NIC.
  • A new storage data blade - the ActiveStor Hybrid 100, aka ASH-100 - hybrid this time, with a choice of HDD and SSD sizes.
  • New DirectFlow software with 15%+ more bandwidth and availability on macOS in addition to Linux.
  • A new SMB stack derived from Samba with a PanFS ACL translation module,
  • And an updated foundation based on FreeBSD.
A very good meeting that invites us to anticipate more good news from Panasas in 2018.

December 13, 2017

Quantum unveils Xcellis Scale-out NAS

Quantum (NYSE:QTM), famous leader in secondary storage, continues to extend into and promote primary storage with part of its portfolio. The company just announced Xcellis Scale-Out NAS, a new iteration of the union of StorNext and Xcellis, delivering a high performance, highly scalable file storage solution. I invite you to read the full announcement on StorageNewsletter with the associated long comment.

The new solution uses a scale-out approach for both capacity and access, as both layers can scale independently. NAS means industry file sharing protocols, NFS and SMB; in that case the file system is exposed through multiple NAS heads, but each file is written entirely via one head. To leverage the parallelism of the platform, users must use the client software, or agent, able to split and stripe data across multiple storage targets.

The solution also introduces a good set of data services such as automated tiering, encryption, point-in-time copies, WORM, load balancing and data protection with replication, RAID and erasure coding, to list a few. Multiple configurations are possible, from all-flash to hybrid, entry-level and finally an archive model, illustrating a wide range of flexible configurations to fit various environments.

With this announcement, Quantum stays in contact with the club of top commercial file storage players such as Avere Systems, Elastifile, Panasas, Qumulo, Rozo Systems and WekaIO.

We understand that the company must react to several quarters, even years, of revenue erosion and the recent departures of its long-time CEO Jon Gacek and CTO Bassam Tabbara. FY 2018 will be interesting to watch.


December 12, 2017

New Edge filer from Avere

Avere Systems, one of the few file storage gems, continues to release products at an interesting pace. The company just announced the FXT 5850, with double the DRAM and SSD capacity, 2.5 times the network bandwidth and finally 2x the performance compared with the previous model.

You can configure a cluster of up to 24 nodes - I let you imagine the capacity and performance you can achieve in that case - and configurations are fully redundant to avoid any impact on production, with failover capability and mirrored writes.

Recognized for its performance and flexibility, Avere marks a new milestone with a Formula 1 product, perfectly aligned with the demanding characteristics of vertical segments that need high data capacity and high speed at the same time.

The Avere FXT 5850 starts at $211,500 and is available now. A huge achievement that keeps Avere in the top file storage club.

November 27, 2017

SuperComputing 17 was a good conference

SuperComputing 2017 was, as always, a very interesting conference; you often see technologies there that will arrive in more classic IT a few years later.

I invite you to read the long summary I wrote for StorageNewsletter, available here. In a nutshell, a few points below.

Topics were about GPUs, of course, burst buffers and fast I/O, file systems and storage, NVMe, and composable infrastructure.

The organization also unveiled the new Top500 ranking and introduced the new IO500.

A lot of Lustre- and Spectrum Scale-based file storage solutions of course, Quantum with StorNext as well, but also Rook, a multi-protocol SDS product - file, block and object - for massive volumes of data, based on Ceph. We saw Panasas too; the company announced the major PanFS 7.0 release and a disaggregated model. Vexata demonstrated a file-based solution running Spectrum Scale, and of course companies such as DDN, NetApp, HPE, Cray, IBM, Dell...

Among the new file storage vendors, or vendors with an innovative distributed file system, we noticed the presence of Avere Systems, Elastifile, Panasas and Qumulo, but Rozo Systems and WekaIO didn't have a booth.

On the object storage side, presence was limited, as pure players like Cloudian were pretty much absent, with the exception of Caringo.

It confirms two things again: object storage is a capacity tier and file access is king.

Next year the event will take place in Dallas, TX, from November 12 to 15, 2018.

