Distributed image store

You have a system that collects various image files over FTP, crops and scales them, and stores them with their metadata. Maybe you even store those files away from your application server, in a NAS or some other custom solution.

Problem

  • Far too many images, so the storage now looks like racks full of servers.
  • People start to talk about node redundancy.
  • Image processing requires dedicated servers.

Solution

This is actually an extension of the auto-leveling image store. The basic idea is the same: the sources send images directly to our solution and it relays the required metadata to the indexer. Please go ahead and read that one if you haven’t already.

In that solution, images are stored on servers and RAID technologies provide the required reliability. Just as a system needs RAID or RAID-like techniques to tolerate drive failures once it holds a number of drives, it needs distributed storage to tolerate node failures once it spans a number of nodes.

Simply put, standard RAID systems, and the NAS solutions or even some SANs built on them, are great as long as your needs fit in a single unit. In theory a single unit can carry thousands of drives, but in practice more than a hundred is pushing it.

[diagram: distributed image store]

Ceph is a distributed storage system that spreads data over a number of nodes and has mechanisms to keep that data safe even when some of the nodes fail. For this solution it was the perfect choice. To keep things compatible with the auto-leveling image store, we:

  • retained the input and output interfaces
  • set up a running Ceph cluster spanning dozens of nodes and hundreds of disks
  • configured Ceph erasure-coded pools and RGW interfaces
  • changed image storage from file-based operations to HTTP-based operations (see the sketch after this list)
  • modified the image processing servers to read from RGW instead
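
The file-to-HTTP switch boils down to talking to RGW through its S3-compatible API instead of a mounted filesystem. Here is a minimal sketch in Python with boto3; the endpoint, bucket name and credentials are placeholders for illustration, not the actual production values.

    import boto3

    # Hypothetical RGW endpoint and credentials; the real values live in the
    # deployment configuration, not in the application code.
    rgw = boto3.client(
        "s3",
        endpoint_url="http://rgw.internal:7480",   # Ceph RGW speaks the S3 protocol
        aws_access_key_id="IMAGE_STORE_KEY",
        aws_secret_access_key="IMAGE_STORE_SECRET",
    )

    BUCKET = "images"  # assumed bucket, backed by an erasure-coded pool

    def store_image(image_id: str, data: bytes, metadata: dict) -> None:
        """Upload a processed image; replaces the old write-to-NAS file operation."""
        rgw.put_object(
            Bucket=BUCKET,
            Key=f"{image_id}.jpg",
            Body=data,
            Metadata=metadata,        # e.g. source, capture time, dimensions (string values)
            ContentType="image/jpeg",
        )

    def load_image(image_id: str) -> bytes:
        """Fetch an image for processing; replaces the old read-from-NAS file operation."""
        response = rgw.get_object(Bucket=BUCKET, Key=f"{image_id}.jpg")
        return response["Body"].read()

The same client works for both the ingest side (put_object) and the image processing servers (get_object), which is what kept the interface change contained.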

This also means that the main problem, the inability to plan for the required capacity and hence the need for flexibility, is handled in a different way. Ceph can grow the cluster using the same or different types of drives or nodes, and it migrates part of the data to the newly added nodes to keep things in balance. It is also possible to shrink the cluster and let Ceph move or recreate the necessary parts on the remaining nodes. All of this is handled automatically, without any interruption to the applications.
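
For completeness, here is a minimal sketch of watching raw capacity while the cluster grows or shrinks, using the librados Python binding (the rados module); the configuration path is an assumption based on a default Ceph installation.

    import rados

    # Assumes a default Ceph client configuration; the path is a placeholder.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    try:
        stats = cluster.get_cluster_stats()
        total_tib = stats["kb"] / 1024 ** 3        # KiB -> TiB
        used_tib = stats["kb_used"] / 1024 ** 3
        print(f"raw capacity: {total_tib:.1f} TiB, used: {used_tib:.1f} TiB, "
              f"objects: {stats['num_objects']}")
    finally:
        cluster.shutdown()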

This solution now powers some of the busiest locations, storing millions of new images every day on dozens of petabytes of raw storage.
