
Thursday, July 23, 2015

How does YARN compare with Mesos?

Both systems have the same goal: allowing you to share a large cluster of machines between different frameworks.

For those who don't know, NextGen MapReduce is a project to refactor the existing MapReduce into a generic layer that handles distributed process execution and resource scheduling (this system is called YARN), and then implement MapReduce as an application on top of it.


Mesos was originally an academic research project with a very similar goal. They created a system which could run a patched version of Hadoop, MPI and other things. This has grown into an Apache Incubator project in its own right.

I have been looking into these two a bit because we would love something like this at LinkedIn, and the nature of these things is that you really only want one (since you want to run everything on it). At the moment we don't have any real experience running stuff on top of either of these, but here is what I have pieced together (it may be wrong in places):

NextGen MapReduce (aka YARN) is primarily written in Java with bits of native code. Mesos is primarily written in C++. YARN only handles memory scheduling (e.g. you request x containers of y MB each), though there are plans to extend it to other resources. Mesos handles both memory and CPU scheduling. In practice I think the OS handles CPU scheduling pretty well, so I am not sure that would help our use cases.

Supporting some kind of disk space and disk I/O scheduling and enforcement would be super cool. For isolation, Mesos uses Linux control groups (cgroups), and YARN uses simple Unix processes.

Linux control groups provide stronger isolation but may have some additional overhead. The resource request model is weirdly backwards in Mesos: in YARN you (the framework) request containers with a given specification and give locality preferences.

In Mesos you get resource "offers" and choose to accept or reject them based on your own scheduling policy. The Mesos model is arguably more flexible, but seemingly more work for the person implementing the framework. YARN is a pretty epic chunk of code, including all kinds of things right down to its own web framework; it is about 3x as much code as Mesos.
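To make the contrast concrete, here is a toy sketch of the two negotiation styles. All class and method names are invented for illustration; this is not the real YARN or Mesos API, just the shape of the two protocols.

```python
# Toy contrast of the two resource-negotiation styles described above.
# Names are illustrative only, not real YARN/Mesos APIs.

class YarnStyleScheduler:
    """Request model: the framework asks for containers; the scheduler decides."""
    def __init__(self, node_mem_mb):
        self.free_mem = dict(node_mem_mb)  # node -> free MB

    def request_containers(self, count, mem_mb, preferred_nodes=()):
        granted = []
        # Honor locality preferences first, then fall back to any free node.
        for node in list(preferred_nodes) + sorted(self.free_mem):
            while len(granted) < count and self.free_mem.get(node, 0) >= mem_mb:
                self.free_mem[node] -= mem_mb
                granted.append(node)
        return granted


class MesosStyleScheduler:
    """Offer model: the scheduler offers resources; the framework accepts or rejects."""
    def __init__(self, node_resources):
        self.offers = list(node_resources.items())  # (node, {"mem": ..., "cpus": ...})

    def run(self, framework_accepts):
        accepted = []
        for node, res in self.offers:
            if framework_accepts(node, res):  # framework's own scheduling policy
                accepted.append(node)
        return accepted


yarn = YarnStyleScheduler({"rack1-a": 4096, "rack2-b": 2048})
print(yarn.request_containers(count=2, mem_mb=1024, preferred_nodes=["rack1-a"]))

mesos = MesosStyleScheduler({"n1": {"mem": 512, "cpus": 1}, "n2": {"mem": 4096, "cpus": 8}})
print(mesos.run(lambda node, res: res["mem"] >= 1024))  # framework rejects small offers
```

Note where the scheduling policy lives: in the request model it is inside the scheduler, while in the offer model the accept/reject callback puts it in the framework, which is exactly why Mesos feels more flexible but more work.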

YARN integrates something similar to the pluggable schedulers everyone knows and loves/hates in Hadoop. So if you are used to the capacity scheduler, hierarchical queues, and all that, you can get something similar. 

I don't think the Mesos scheduling capabilities are quite as robust (they list hierarchical scheduling on their roadmap). YARN integrates with Kerberos and essentially inherits the Hadoop security architecture.

I don't think Mesos attempts to deal with security. YARN directly handles rack and machine locality in your requests, which is convenient. In Mesos you can implement this, but it is less out of the box.

Mesos is much more mature as a project at this point. It is a standalone thing, with great documentation and good starter examples. YARN exists only on Hadoop trunk (and some feature branches) in the mapreduce directory, and the docs are super sparse.

Framework comparisons

Mesos is a meta-framework scheduler rather than an application scheduler like YARN.
Suppose we want to run 1000 MapReduce jobs and 1000 Spark jobs on Mesos. First we need to set up Hadoop and Spark on Mesos, then we submit each job to its corresponding framework. Hadoop will schedule the 1000 MapReduce jobs and Spark will schedule the 1000 Spark jobs. If we want to run the 1000 MapReduce jobs across multiple Hadoop frameworks, we need to manually set up more Hadoop instances, then decide which Hadoop instance each job is submitted to. Overall, we submit jobs directly to the frameworks and Mesos is not aware of the jobs; we are responsible for setting up the frameworks.

In YARN, we can submit all 2000 jobs to YARN, which will launch either a Hadoop or a Spark instance for each job. YARN will schedule all 2000 jobs together, given their resource requirements. After a job is done, YARN will shut down the corresponding Hadoop or Spark instance. Overall, we submit all jobs to YARN and YARN schedules all of them; YARN is responsible for setting up the "one-time framework" for each job.
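The two submission paths can be sketched like this. The function names and job records are made up for illustration; the point is only who sees the individual jobs.

```python
# Toy illustration of the two job-submission paths; names are invented.

def submit_mesos_style(jobs, frameworks):
    """Jobs go to long-running frameworks; Mesos never sees individual jobs."""
    queues = {name: [] for name in frameworks}
    for job in jobs:
        queues[job["type"]].append(job["name"])  # user picks the framework instance
    return queues

def submit_yarn_style(jobs):
    """Every job goes to YARN, which spins up a one-time framework per job."""
    launched = []
    for job in jobs:
        # YARN allocates containers, then starts a short-lived MR/Spark instance.
        launched.append(f"{job['type']}-instance-for-{job['name']}")
    return launched

jobs = [{"name": "wordcount", "type": "mapreduce"},
        {"name": "pagerank", "type": "spark"}]
print(submit_mesos_style(jobs, ["mapreduce", "spark"]))
print(submit_yarn_style(jobs))
```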

Mesos for IoT applications

YARN is Hadoop-specific and is therefore specifically targeted at scheduling Hadoop-style, MapReduce data-driven workloads. In contrast, Mesos can run any kind of workload, including frameworks that are not built on top of Hadoop, such as a Ruby or Python app. One of the most common use cases for Mesos is running web applications and other long-running services in both single-framework and multi-framework environments. Developers choose Mesos for its scheduling and bin-packing capabilities, but also for its ease of deployment, fault tolerance and app portability.
My prediction is that YARN will find its rightful place as a next-generation scheduler for Hadoop-driven workloads, whereas Mesos will redefine how we build the next generation of distributed web, mobile and IoT applications.


There seems to be a lot of momentum; it is just early. YARN is going to be the basis for Hadoop MapReduce going forward, so if you have a big Hadoop cluster and want to be able to run other stuff on it, that is likely appealing and will probably work more transparently than Mesos. YARN was written by the Yahoo/Hortonworks Hadoop team, which should know a thing or two about multi-tenancy and very large-scale cluster computing. YARN is not yet in a stable Hadoop release, so I am not sure how much actual testing it has had or the extent of deployment internally at Yahoo.

Regardless, if/when the YARN team is able to get the majority of the world's Hadoop clusters successfully running on top of YARN, that will likely get the project to a level of hardening that will be hard to compete with. Mesos ships with a number of out-of-the-box frameworks ported to it. This somewhat helps to validate the generality of their framework, but I don't know how much of a hack the various ports of things to it are.

Saturday, May 2, 2015

What is the difference between Docker and LXC?

Since the launch of Docker, top cloud service providers have launched container services for enterprises.

But some of them are still hung up on basic questions: what is Docker? What is the difference between Docker and LXC, or between Docker and a VM?

So in this post we explore the actual differences between Docker and LXC.

Docker is not a replacement for LXC. "LXC" refers to capabilities of the Linux kernel (specifically namespaces and control groups) which allow sandboxing processes from one another and controlling their resource allocations.
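Real cgroups are driven through the /sys/fs/cgroup filesystem, but the flavor of kernel-enforced resource control can be demonstrated with the older per-process rlimit mechanism from Python's standard library. This is a simplified stand-in, not how Docker or LXC actually configure cgroups, and it assumes a Linux system where RLIMIT_AS is enforced.

```python
# Demonstrates kernel-enforced resource limits: a child process whose
# address space is capped cannot allocate past the cap, no matter what
# the code inside it tries to do. (Linux; rlimits, not cgroups.)
import resource
import subprocess
import sys

def run_with_memory_cap(code, max_bytes):
    """Run a Python snippet in a child whose address space is capped."""
    def apply_limit():
        # Runs in the child between fork() and exec(), like a tiny init step.
        resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))
    proc = subprocess.run([sys.executable, "-c", code],
                          preexec_fn=apply_limit,
                          capture_output=True)
    return proc.returncode

# A 2 GiB allocation fails inside a 1 GiB address-space cap (nonzero exit)...
print(run_with_memory_cap("x = bytearray(2 * 1024**3)", 1024**3))
# ...while a tiny allocation under the same cap succeeds (exit 0).
print(run_with_memory_cap("x = bytearray(1024)", 1024**3))
```

cgroups generalize this idea: limits apply to whole groups of processes, cover more resources (CPU shares, block I/O, etc.), and can be adjusted while the group is running.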

On top of this low-level foundation of kernel features, Docker offers a high-level tool with several powerful functionalities:
  • Portable deployment across machines. Docker defines a format for bundling an application and all its dependencies into a single object which can be transferred to any docker-enabled machine, and executed there with the guarantee that the execution environment exposed to the application will be the same. LXC implements process sandboxing, which is an important prerequisite for portable deployment, but that alone is not enough. If you sent me a copy of your application installed in a custom LXC configuration, it would almost certainly not run on my machine the way it does on yours, because it is tied to your machine's specific configuration: networking, storage, logging, distro, etc. Docker defines an abstraction for these machine-specific settings, so that the exact same docker container can run - unchanged - on many different machines, with many different configurations.
  • Application-centric. Docker is optimized for the deployment of applications, as opposed to machines. This is reflected in its API, user interface, design philosophy and documentation. By contrast, the lxc helper scripts focus on containers as lightweight machines - basically servers that boot faster and need less ram. We think there's more to containers than just that.
  • Automatic build. Docker includes a tool for developers to automatically assemble a container from their source code, with full control over application dependencies, build tools, packaging etc. They are free to use make, maven, chef, puppet, salt, debian packages, rpms, source tarballs, or any combination of the above, regardless of the configuration of the machines.
  • Versioning. Docker includes git-like capabilities for tracking successive versions of a container, inspecting the diff between versions, committing new versions, rolling back etc. The history also includes how a container was assembled and by whom, so you get full traceability from the production server all the way back to the upstream developer. Docker also implements incremental uploads and downloads, similar to "git pull", so new versions of a container can be transferred by only sending diffs.
  • Component re-use. Any container can be used as a "base image" to create more specialized components. This can be done manually or as part of an automated build. For example you can prepare the ideal python environment, and use it as a base for 10 different applications. Your ideal postgresql setup can be re-used for all your future projects. And so on.
  • Sharing. Docker has access to a public registry where thousands of people have uploaded useful containers: anything from redis, couchdb, postgres to irc bouncers to rails app servers to hadoop to base images for various distros. The registry also includes an official "standard library" of useful containers maintained by the docker team. The registry itself is open-source, so anyone can deploy their own registry to store and transfer private containers, for internal server deployments for example.
  • Tool ecosystem. Docker defines an API for automating and customizing the creation and deployment of containers. There are a huge number of tools integrating with docker to extend its capabilities. PaaS-like deployment (Dokku, Deis, Flynn), multi-node orchestration (maestro, salt, mesos, openstack nova), management dashboards (docker-ui, openstack horizon, shipyard), configuration management (chef, puppet), continuous integration (jenkins, strider, travis), etc. Docker is rapidly establishing itself as the standard for container-based tooling.
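Several of the points above (versioning, component re-use, incremental transfer) fall out of one design decision: an image is a stack of content-addressed layers, and a derived image stores only its diff against the base. Here is a toy model of that idea; the class and helper names are invented and none of Docker's real on-disk formats are used.

```python
# Toy model of layered, content-addressed images: a child image reuses
# the base's layers and adds only its own diff on top.
import hashlib

def layer_id(files):
    """Content-address a layer (a dict of path -> contents)."""
    return hashlib.sha256(repr(sorted(files.items())).encode()).hexdigest()[:12]

class Image:
    def __init__(self, layers):
        self.layers = layers  # list of {path: contents} dicts, base first

    def flatten(self):
        """The filesystem a container would see."""
        merged = {}
        for layer in self.layers:  # later layers shadow earlier ones
            merged.update(layer)
        return merged

    def extend(self, new_files):
        """Reuse every existing layer; add only the diff as a new layer."""
        return Image(self.layers + [new_files])

base = Image([{"/usr/bin/python": "v3.4", "/etc/os-release": "debian"}])
app1 = base.extend({"/app/main.py": "print('app1')"})
app2 = base.extend({"/app/main.py": "print('app2')"})

# Both apps share the base layer, so shipping app2 to a machine that
# already holds app1 means transferring only app2's top layer.
print([layer_id(l) for l in app1.layers[:-1]] ==
      [layer_id(l) for l in app2.layers[:-1]])  # True: base reused
print(app1.flatten()["/app/main.py"])
```

This is also why "git pull"-style incremental transfer works: two machines compare layer ids and only the missing layers cross the wire.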