Pets versus cattle - help me grasp this

I've looked at the presentations but I don't get it. Pets are virtual machines with a long life, typically measured in years. They're like good old-fashioned physical machines, except they're virtual. The file server is a pet. The database server is a pet. Each pet has individual characteristics and is unique.

Cattle are virtual machines with a lifetime of maybe a few minutes, maybe a few hours. Cattle are not unique, and we create and destroy them with APIs. So we have an application running somewhere that creates virtual machines, orchestrates some kind of workload, then destroys them when they're no longer needed. If I need to, say, compile 1000 different program modules, I can create 1000 virtual machines, each compiling one module. If I have enough raw capacity somewhere, I can do that job 1000 times faster than trying it with one bare-metal box.
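For the record, here's roughly what I mean by "create and destroy them with APIs" - a minimal boto3 sketch of the 1000-compiler scenario. The AMI ID, instance type, and compile script are all placeholders, and a real account would hit instance quotas long before 1000:

    # Minimal "cattle" sketch using boto3 (AWS EC2).
    # The AMI ID, instance type, and user-data script are placeholders.
    import boto3

    ec2 = boto3.resource("ec2")

    # Launch 1000 throwaway workers. Each one compiles its module at
    # boot via the user-data script, then powers itself off; the
    # shutdown behavior below turns that poweroff into terminate.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical build image
        InstanceType="t3.small",
        MinCount=1000,
        MaxCount=1000,
        UserData="#!/bin/sh\n/opt/build/compile-one-module.sh\npoweroff\n",
        InstanceInitiatedShutdownBehavior="terminate",
    )

    # Nobody logs in, nobody patches them, nobody gives them names.
    print(f"launched {len(instances)} disposable workers")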

So that's the idea. I get this much.

So then I see presentations that say traditional workloads based on the pet approach will go away over time, replaced by workloads based on the cattle approach. It's the future. It's what cloud is really all about. Cloud isn't about where the VMs are hosted, it's about using cattle to run the workload instead of pets. With the cattle approach, I can quickly scale up and down (what's the word for "stretchy"?) and I can deploy my cattle in any number of hosting scenarios - locally or with different hosting providers.
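As far as I can tell, the scale-up-and-down part is just a control loop watching some load metric and adjusting a herd of identical VMs. A toy sketch with numbers I made up (launch/terminate would be real API calls like the boto3 one above):

    # Toy "stretchy" control logic: decide how many cattle VMs to run
    # from a load metric. All the numbers here are invented.

    MIN_WORKERS, MAX_WORKERS = 2, 200
    JOBS_PER_WORKER = 50

    def desired_workers(pending_jobs: int) -> int:
        """How many identical VMs we want for this much queued work."""
        want = -(-pending_jobs // JOBS_PER_WORKER)   # ceiling division
        return max(MIN_WORKERS, min(MAX_WORKERS, want))

    def reconcile(pending_jobs: int, running: int) -> str:
        want = desired_workers(pending_jobs)
        if want > running:
            return f"launch {want - running} VMs"      # stretch
        if want < running:
            return f"terminate {running - want} VMs"   # shrink
        return "steady"

    # A quiet night, a busy morning, and back down again:
    for jobs, running in [(40, 2), (5000, 2), (5000, 100), (40, 100)]:
        print(jobs, running, "->", reconcile(jobs, running))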

OK. So drilling down - how would you run, say, an ERP workload using the cattle approach? You have one database with all your part numbers and customer info and procedures, and a bunch of threads pound away on that database. How do lots of cattle that dynamically appear and disappear do it better than a few pets? Somehow, some way, you still have to deal with business rules, and you still have all the other computer science issues going on. What advantage does the cattle approach offer?
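To make the question concrete, here's the shape I keep seeing in the presentations - stateless, interchangeable workers with all the state in one shared database. The DSN and table are invented, and psycopg2 is just an example driver:

    # The pattern as I understand it: the cattle VMs are stateless
    # workers, and ALL the state lives in one shared database.
    # The DSN and the orders table are invented for illustration.
    import psycopg2

    DSN = "host=db.corp.example dbname=erp user=worker password=secret"

    def process_order(order_id: int) -> None:
        # Any worker, anywhere, can run this: no local state, so
        # killing this VM and starting another one loses nothing.
        with psycopg2.connect(DSN) as conn:
            with conn.cursor() as cur:
                cur.execute(
                    "UPDATE orders SET status = 'picked' WHERE id = %s",
                    (order_id,),
                )
        # the connection context manager commits on success

Which is my point: the workers are disposable, but every one of them still leans on that single database back at corporate.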

Let's say you need to "stretch" (I still don't remember the official word) your capacity, so you create a bunch of cattle VMs out in, say, AWS or Rackspace or somewhere like that. But the database lives here at corporate. Is there some telecom magic I missed that will give those remote VMs the same access to the database as the local VMs?

Ah - but wait - put your database inside a Gluster file system and then it will replicate all over the place. Well, OK, maybe. Except Gluster doesn't work well with databases. And you still have a bunch of telecom issues to solve, but now they're with replication. And I don't have a clue how you would deal with stuff like database locking. What if one of the cattle at corporate wants to touch the same record as one of the cattle out at AWS? They're both touching their own copies - how do you keep them from stomping on each other when they replicate back and forth?
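Here's a toy version of the collision I'm worried about - two copies of one inventory record and naive last-writer-wins replication, no real database involved:

    # Toy illustration of the lost-update problem with two replicas.
    # No real replication here -- just last-writer-wins on a dict.

    corporate = {"qty_on_hand": 10}   # copy of the record at corporate
    aws       = {"qty_on_hand": 10}   # copy of the same record at AWS

    # A worker at corporate ships 3 units against its local copy...
    corporate["qty_on_hand"] -= 3     # corporate now thinks qty is 7

    # ...while, at the same time, a worker at AWS ships 4 units.
    aws["qty_on_hand"] -= 4           # AWS now thinks qty is 6

    # Naive replication: whoever syncs last wins.
    corporate["qty_on_hand"] = aws["qty_on_hand"]

    print(corporate["qty_on_hand"])   # 6 -- but 10 - 3 - 4 should be 3
    # The corporate decrement was silently lost. Avoiding this takes a
    # single writer, distributed locks, or conflict resolution logic.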

So the idea is, with the cattle approach, you don't have to buy all that processing capacity to service peak loads; you grow and shrink it as needed. But by the time you buy all the telecom capacity to copy everything all over the place, and come up with some kind of scheme to keep all the locks straight, how much of a win is it really? Seems to me, you're still buying enough capacity to service a peak workload, just in a different form.
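Here's the back-of-the-envelope math behind my skepticism, with numbers I made up entirely:

    # Made-up numbers: does bursting to the cloud beat owning peak
    # capacity? Every figure below is invented for illustration.

    owned_servers  = 20           # enough for peak, owned outright
    owned_cost     = owned_servers * 300   # $/month each, amortized

    base_servers   = 5            # enough for the average load
    burst_servers  = 15           # rented only during the peak
    peak_hours     = 60           # hours per month at peak
    rate           = 0.50         # $/hour per rented VM

    cattle_cost = (base_servers * 300
                   + burst_servers * rate * peak_hours)

    print(f"own the peak:  ${owned_cost}/month")       # $6000/month
    print(f"burst instead: ${cattle_cost:.0f}/month")  # $1950/month
    # The catch: this ignores the WAN links and replication plumbing
    # needed to get the data out to those burst VMs -- which is my point.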

Or maybe I'm missing something. Help me out.

  • Greg

Responses