Thursday, July 23, 2015

How does YARN compare with Mesos?

Both systems have the same goal: allowing you to share a large cluster of machines between different frameworks.

YARN:-
For those who don't know, NextGen MapReduce is a project to factor the existing MapReduce into a generic layer that handles distributed process execution and resource scheduling (this system is called YARN) and then implement MapReduce as an application on top of this.

Mesos:- 

Mesos was originally an academic research project with a very similar goal. They created a system which could run a patched version of Hadoop, MPI and other things. This has grown into an Apache Incubator project in its own right.

I have been looking into these two a bit because we would love something like this at LinkedIn, and the nature of these things is that you really only want one (since you want to run everything on it). So at the moment we don't have any real experience running stuff on top of either of these, but here is what I have pieced together (may be wrong in places):

NextGen MapReduce (aka YARN) is primarily written in Java with bits of native code. Mesos is primarily written in C++. YARN only handles memory scheduling (e.g. you request x containers of y MB each), though there are plans to extend it to other resources. Mesos handles both memory and CPU scheduling. In practice I think the OS handles CPU scheduling pretty well, so I am not sure that would help our use cases.

Supporting some kind of disk space and disk I/O scheduling and enforcement would be super cool. For isolation, Mesos uses Linux containers (http://lxc.sourceforge.net), while YARN uses simple Unix processes.

Linux containers provide stronger isolation but may carry some additional overhead.

The resource request model in Mesos is weirdly backwards. In YARN you (the framework) request containers with a given specification and give locality preferences.

In Mesos you get resource "offers" and choose to accept or reject those based on your own scheduling policy. The Mesos model is arguably more flexible, but seemingly more work for the person implementing the framework.

YARN is a pretty epic chunk of code, including all kinds of things right down to its own web framework. It is about 3x as much code as Mesos.
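The difference between the two models can be sketched in a few lines of purely illustrative Python. To be clear, every name here is invented for the example; neither project's real API looks like this:

```python
# Illustrative sketch only: all names are invented, not real YARN/Mesos APIs.

# YARN-style: the framework *asks* for what it wants, and the central
# resource manager decides where (and whether) to place it.
def yarn_style_request(resource_manager, num_containers, mem_mb, preferred_hosts):
    # The framework states its requirements up front.
    return resource_manager.allocate(
        containers=num_containers,
        memory_mb=mem_mb,
        locality=preferred_hosts,   # a preference, not a guarantee
    )

# Mesos-style: the master *offers* resources, and the framework's own
# scheduler decides whether each offer fits its policy.
def mesos_style_scheduler(offers, mem_needed_mb):
    accepted, declined = [], []
    for offer in offers:            # each offer = free resources on one node
        if offer["mem_mb"] >= mem_needed_mb:
            accepted.append(offer)  # launch a task against this offer
        else:
            declined.append(offer)  # hand the resources back to the master
    return accepted, declined

offers = [{"host": "a", "mem_mb": 512}, {"host": "b", "mem_mb": 2048}]
accepted, declined = mesos_style_scheduler(offers, mem_needed_mb=1024)
```

The point of the sketch is where the scheduling decision lives: in the first style it lives in the central manager, in the second it lives in your framework code, which is exactly why the offer model is more flexible but more work.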

YARN integrates something similar to the pluggable schedulers everyone knows and loves/hates in Hadoop. So if you are used to the capacity scheduler, hierarchical queues, and all that, you can get something similar. 
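As a rough illustration of what hierarchical queues buy you, here is a toy calculation (not real scheduler code; the queue names and fractions are made up) of how a nested queue tree divides cluster capacity, with each queue assigned a fraction of its parent:

```python
# Toy model: each queue gets a fraction of its parent's capacity.
# Queue names and fractions are invented for illustration.
queues = {
    "root":          ("", 1.0),
    "root.prod":     ("root", 0.7),
    "root.dev":      ("root", 0.3),
    "root.prod.etl": ("root.prod", 0.5),
}

def effective_capacity(name):
    """Multiply fractions up the tree to get a queue's share of the cluster."""
    parent, fraction = queues[name]
    if not parent:
        return fraction
    return fraction * effective_capacity(parent)

# e.g. root.prod.etl gets 0.5 * 0.7 * 1.0 = 35% of the cluster
share = effective_capacity("root.prod.etl")
```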

I don't think the Mesos scheduling capabilities are quite as robust (they list hierarchical scheduling on their roadmap).

YARN integrates with Kerberos and essentially inherits the Hadoop security architecture.

I don't think Mesos attempts to deal with security.

YARN directly handles rack and machine locality in your requests, which is convenient. In Mesos you can implement this, but it is less out of the box.

Mesos is much more mature as a project at this point. It is a standalone thing, with great documentation and good starter examples. YARN exists only on Hadoop trunk (and some feature branches) in the mapreduce directory, and the docs are super sparse.

Framework comparisons:-

Mesos is a meta-framework scheduler, rather than an application scheduler like YARN.
Suppose we want to run 1000 MapReduce jobs and 1000 Spark jobs on Mesos. First we need to set up Hadoop and Spark on Mesos, then we submit each job to its corresponding framework. Hadoop will schedule the 1000 MapReduce jobs and Spark will schedule the 1000 Spark jobs. If we want to run the 1000 MapReduce jobs across multiple Hadoop frameworks, we need to manually set up more Hadoop instances and decide which Hadoop instance each job is submitted to. Overall, we submit jobs directly to the frameworks and Mesos is not aware of the jobs; we are responsible for setting up the frameworks.
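That division of responsibility might be sketched like this (invented class names, not the real Mesos API): the Mesos master sees only the registered frameworks, while each framework keeps and schedules its own private job queue.

```python
# Illustrative only: in this model the master tracks frameworks, not jobs.
class Framework:
    def __init__(self, name):
        self.name = name
        self.jobs = []          # the framework's private job queue

    def submit(self, job):      # users submit jobs *to the framework*
        self.jobs.append(job)

class MesosMaster:
    def __init__(self):
        self.frameworks = []    # all the master knows about

    def register(self, framework):
        self.frameworks.append(framework)

mesos = MesosMaster()
hadoop, spark = Framework("hadoop"), Framework("spark")
mesos.register(hadoop)
mesos.register(spark)

for i in range(1000):
    hadoop.submit(f"mr-job-{i}")    # scheduled by Hadoop, invisible to Mesos
    spark.submit(f"spark-job-{i}")  # scheduled by Spark, invisible to Mesos
```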

In YARN, we can submit all 2000 jobs to YARN, which will launch either a Hadoop or a Spark instance for each job. YARN will schedule all 2000 jobs together, given their resource requirements. After a job is done, YARN will shut down the corresponding Hadoop or Spark instance. Overall, we submit all jobs to YARN and YARN schedules all of them; YARN is responsible for setting up the "one-time framework" for each job.
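The YARN model might be sketched as follows (invented names, not the real YARN API): every job goes to the same resource manager, which spins up a one-time application instance per job and tears it down when the job finishes.

```python
# Illustrative only: in this model the central manager sees every job.
class YarnResourceManager:
    def __init__(self):
        self.queue = []
        self.running = {}       # job -> the one-time instance serving it

    def submit(self, job, framework_type):
        self.queue.append((job, framework_type))

    def schedule_all(self):
        while self.queue:
            job, framework_type = self.queue.pop(0)
            # launch a one-time framework instance just for this job
            self.running[job] = f"{framework_type}-instance-for-{job}"

    def finish(self, job):
        del self.running[job]   # tear the instance down with the job

yarn = YarnResourceManager()
for i in range(1000):
    yarn.submit(f"mr-job-{i}", "hadoop")
    yarn.submit(f"spark-job-{i}", "spark")
yarn.schedule_all()
```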


Mesos for IoT Applications:-

YARN is Hadoop-specific and is, therefore, specifically targeted at scheduling Hadoop-style, MapReduce data-driven workloads. In contrast, Mesos can run any kind of workload, including frameworks that are not built on top of Hadoop, such as a Ruby or Python app. One of the most common use cases for Mesos is running web applications and other long-running services, in both single-framework and multi-framework environments. Developers choose Mesos for its scheduling and bin-packing capabilities, but also for its ease of deployment, fault tolerance, and app portability.

My prediction is that YARN will find its rightful place as a next-generation scheduler for Hadoop-driven workloads, whereas Mesos will redefine how we build the next generation of distributed web, mobile, and IoT applications.

Conclusion:-

There seems to be a lot of momentum behind both; it is just early. YARN is going to be the basis for Hadoop MapReduce going forward, so if you have a big Hadoop cluster and want to be able to run other stuff on it, that is likely appealing and will probably work more transparently than Mesos. YARN was written by the Yahoo/Hortonworks Hadoop team, which should know a thing or two about multi-tenancy and very large-scale cluster computing. YARN is not yet in a stable Hadoop release, so I am not sure how much actual testing it has had or the extent of deployment internally at Yahoo.

Regardless, if/when the YARN team is able to get the majority of the world's Hadoop clusters successfully running on top of YARN, that will likely get the project to a level of hardening that will be hard to compete with. Mesos ships with a number of out-of-the-box frameworks ported to it. This somewhat helps to validate the generality of their framework, but I don't know how much of a hack the various ports are.