I've been looking at resource allocation problems within data centers lately. Anyone working in this area is bound to come across the concept of Pareto optimality sooner or later, so I thought I'd finally clear the cobwebs off this blog by writing about it.

Pareto optimality is an economics concept developed by Vilfredo Pareto that has important applications in computer science.


"You're welcome, computer nerds" - Vilfredo Pareto



Simply put, in a Pareto optimal allocation of resources between multiple parties, no single party can be made better off without making at least one other party worse off.

Here's an example: if we have a pie and divide it into two equal halves, one each for Alice and Bob, it's impossible to improve either person's share without negatively impacting the other. Splitting the pie equally is therefore a Pareto optimal outcome.

So is Pareto optimality always desirable? Heck no.

This is because Pareto optimality doesn't tell you anything about fairness, which is also a very desirable property when it comes to resource allocation. In our Alice and Bob example, consider an allocation where the entire pie is given to Alice and Bob doesn't get any. Is it fair to Bob? Of course it isn't. Is it Pareto optimal? Yes, because you cannot increase Bob's share of the pie without taking an equivalent share away from Alice.

So what exactly is optimal about a Pareto efficient allocation? Put naively, it is an allocation that focuses on utilisation of the available resources. This is quite natural given the concept's origins in economic theory: in an economic system, it is undesirable for supply to sit under-utilised while there are still demands to be met.

In most practical situations involving computer systems, resources are allocated incrementally between different parties (as opposed to a one-time allocation). As an example, in an IaaS setup like Amazon EC2, the system needs to provision resources continuously to keep up with incoming demands from different tenants. In such a scenario, you're matching a pool of resources (compute cores, disk, etc.) to an incoming demand vector from a tenant on the fly. If this tenant's demand is met while all other tenants' allocations remain at least as good as they were before, then the new allocation is considered a Pareto improvement.
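To make this concrete, here's a minimal sketch in Python (the function name `is_pareto_improvement` and the utility numbers are my own, purely illustrative): given each tenant's utility before and after a proposed allocation, it checks that no tenant is worse off and at least one is strictly better off.

```python
def is_pareto_improvement(before, after):
    """Return True if `after` is a Pareto improvement over `before`.
    Each argument maps a tenant to its utility (e.g. the fraction of
    its demand vector that was satisfied)."""
    # No tenant may be worse off than before...
    if any(after[t] < before[t] for t in before):
        return False
    # ...and at least one tenant must be strictly better off.
    return any(after[t] > before[t] for t in before)

# Carol's new demand is met from spare capacity; Alice and Bob keep theirs.
before = {"alice": 0.5, "bob": 0.5, "carol": 0.0}
after  = {"alice": 0.5, "bob": 0.5, "carol": 0.4}
print(is_pareto_improvement(before, after))  # True
```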

Like the notion of Pareto optimality, a Pareto improvement is a somewhat weak criterion. For instance, making a few allocations that aren't Pareto improvements right now may allow the system to reach a much more desirable state later.

Going back to our pie example, assume Alice has been allocated the entire pie. If the pie is big enough, Alice is unlikely to be able to eat all of it in one sitting without falling sick. Now if we take the extra pie from Alice and give it to Bob, that isn't a Pareto improvement, because Bob becomes better off at Alice's immediate expense; but Alice is eventually better off for not falling sick. The result is a much more desirable outcome, and is also Pareto optimal. Similarly, your operating system's scheduler may have to slow down your massive compilation job if you want to watch a YouTube video at the same time.

The set of Pareto optimal choices in a system is called the Pareto frontier. Given a finite set of data points, the Pareto frontier can be computed using skyline query algorithms.
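As an illustration, here's a naive O(n²) scan in Python (a sketch, not one of the optimised skyline algorithms from the database literature) that extracts the Pareto frontier from a finite set of points, assuming larger values are better in every dimension:

```python
def dominates(p, q):
    """True if p Pareto-dominates q: p is at least as good in every
    dimension and strictly better in at least one (larger is better)."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def pareto_frontier(points):
    """Keep the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (throughput, memory headroom) scores for five configurations.
points = [(4, 1), (3, 3), (2, 2), (1, 4), (3, 1)]
print(pareto_frontier(points))  # [(4, 1), (3, 3), (1, 4)]
```

Real skyline implementations (block-nested-loop or divide-and-conquer variants, for instance) avoid the quadratic number of comparisons, but the underlying idea is the same.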

I'll stop here for now because my stomach's expressing its disgruntlement at being denied food for a longer period than usual. In a future post, I'll talk about Pareto optimality and its relation to Nash Equilibria.