Bridging today’s data solutions with tomorrow's


By Rob Makin, Group Director, Data Centre Group, Lenovo ANZ
Monday, 12 June, 2017



A famous company once coined the phrase: “The whole world is… data!” Indeed, everything ties back to data, whether we’re talking about YouTube, Facebook, Uber, Deliveroo, SAP, high-performance computing (HPC) or databases. Data volumes have grown exponentially with the rise of the Internet of Things, social media and big data.

To truly benefit from the new knowledge economy, organisations need to manage data effectively, and to analyse it and extract meaningful intelligence from it. The possibilities are endless. For instance, LinkedIn uses user-generated Twitter feeds to forecast market demand and trends, providing inspiration for companies to innovate and develop new products; and Toyota is leveraging virtual reality to let customers test-drive new vehicles without having to manufacture costly prototypes.

In general, data that is frequently accessed is termed ‘hot data’. Examples include databases, enterprise resource planning systems and web pages. With flash drives now priced more competitively, all-flash arrays have become a realistic choice for these applications.

The real growth in flash adoption, however, came with hybrid arrays, which tier data between flash and traditional disk drives. With the arrival of storage virtualisation, enterprises can now perform this tiering entirely in software.
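
The tiering decision itself can be sketched simply: track how often each chunk of data is touched over a recent window and keep the busiest chunks on flash. The Python sketch below is illustrative only; the ‘extent’ granularity, access-count threshold and seven-day window are assumptions made for the example, not how any particular hybrid array or virtualisation layer actually decides.

from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative thresholds only; real hybrid arrays and storage
# virtualisation layers use their own heat-map heuristics.
HOT_ACCESS_COUNT = 10             # accesses within the window to count as hot
ACCESS_WINDOW = timedelta(days=7)

@dataclass
class Extent:
    """A chunk of data tracked by this hypothetical tiering engine."""
    name: str
    access_times: list = field(default_factory=list)

    def record_access(self, when: datetime) -> None:
        self.access_times.append(when)

    def recent_accesses(self, now: datetime) -> int:
        return sum(1 for t in self.access_times if now - t <= ACCESS_WINDOW)

def choose_tier(extent: Extent, now: datetime) -> str:
    """Keep frequently accessed (hot) extents on flash, the rest on disk."""
    return "flash" if extent.recent_accesses(now) >= HOT_ACCESS_COUNT else "disk"

if __name__ == "__main__":
    now = datetime.now()
    erp_extent = Extent("erp-database-extent")        # heavily used hot data
    archive_extent = Extent("2015-backup-extent")     # cold archival data
    for _ in range(25):
        erp_extent.record_access(now)
    archive_extent.record_access(now - timedelta(days=90))
    print(erp_extent.name, "->", choose_tier(erp_extent, now))          # flash
    print(archive_extent.name, "->", choose_tier(archive_extent, now))  # disk

In practice, arrays move data in fixed-size extents and gather far richer heat statistics, but the principle of promoting hot extents to flash and demoting cold ones to disk is the same.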

Data that is infrequently accessed is called ‘cold data’. It can exist either in structured form, such as the backup and archival data most enterprises already retain, or in unstructured form: large videos, pictures, blogs and the like. Individual files can range from a few kilobytes to multiple terabytes, and file counts from a few hundred up to billions or even trillions.

Managing millions of small files is an altogether different matter from managing a smaller number of ultralarge ones. While both cases can consume the same storage capacity, older architectures cannot support the level of granularity this demands from a data management standpoint. Software-defined storage (SDS) and object storage have emerged to address this.
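
Object storage sidesteps the per-file overhead of deep directory trees by addressing every item with a flat key and attaching metadata directly to the object. The sketch below assumes an S3-compatible object store accessed through the boto3 library; the endpoint, bucket, key and metadata names are placeholders, and the article does not tie itself to any specific platform.

import boto3

# Placeholder endpoint and credentials for an S3-compatible object store;
# substitute whatever SDS/object platform is actually in use.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Items are stored against flat keys rather than deep directory trees,
# with descriptive metadata kept alongside each object.
s3.put_object(
    Bucket="media-archive",
    Key="blogs/2017/06/12/post-0001.html",
    Body=b"<html>...</html>",
    Metadata={"origin": "cms", "retention": "cold"},
)

# Retrieval is by key, so lookup cost does not depend on directory depth.
obj = s3.get_object(Bucket="media-archive", Key="blogs/2017/06/12/post-0001.html")
print(obj["Metadata"])

Because lookups are by key rather than by path traversal, the same approach scales from a few hundred objects to billions without restructuring the namespace.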

Object storage aside, traditional file and block storage is equally challenging to manage at large capacities because of the escalating cost of storage area networks. SDS, with its scale-out architecture, can help reduce these costs significantly.

Where homogeneous workloads need performance, capacity and scale all at once, we are seeing specific HPC workloads, such as machine learning systems, that require not just petaflops of compute performance but also petabytes of data.

To cope with these different types of data, the industry has to take a more granular approach to how it addresses different workloads. And even as new software-defined technologies develop rapidly, enterprises will still need to bridge today’s traditional infrastructure with the software-defined solutions of tomorrow.

Rob Makin is Director, Data Centre Group, Lenovo Australia and New Zealand, responsible for business growth and for educating the market about the company’s enterprise offering.

