The clojure-hadoop library is an excellent facility for running Clojure functions within the Hadoop MapReduce framework. At Compass Labs we’ve been using its job abstraction for parts of our production flow and have found a number of limitations that motivated us to implement extensions to the base library. We’ve submitted these for integration into a new release of clojure-hadoop, which should be available shortly.
There are still some design and implementation warts in the current release, which we or other users should fix in the coming weeks.
Continue reading “Steps and Flows: Higher-order Composition for Clojure-Hadoop”
I have a bad tendency in my research work to write my own code and libraries from scratch, in large part because I’ve decided to keep most of my coding in Common Lisp to leverage prior tools. However, I’ve recently been given a painful demonstration of how it is often faster to pay the up-front cost to learn the right tool than to rewrite (and maintain) the subsets you think you need. For example, I found myself venturing into Clojure/Java/Hadoop for my commercial work this year as a compromise between Lisp / dynamic language features and integration benefits. This week I’m finding the need to do some rather sophisticated work with graphical models and I need some tools to build and evaluate them.
I’ve looked at a wide variety of open source approaches, such as Samiam (no continuous variables), WinBUGS (Windows only), OpenBUGS (not quite ready), HBC (inference only), MALLET (OK, but I don’t like Java and it doesn’t support all forms of real-valued random variables), Incanter (limited but growing support for graphical models), and R.
Continue reading “Learning Bayesian statistics with R”
Compass Labs is a heavy user of Clojure for analytics and data processing. We have been using Stuart Sierra’s excellent clojure-hadoop package for running a wide variety of algorithms, derived from dozens of different Java libraries, over our datasets on large Elastic MapReduce clusters.
The standard way to build a jar for MapReduce is to pack all the libraries for a given package into a single jar and upload it to EMR. Unfortunately, building uberjars for Hadoop is like swinging a mallet when a small tap is needed.
We recently reached a point where the overhead of uploading large jars caused a noticeable slowdown in our development cycle, especially when launching jobs over connections with limited upload bandwidth, and with the slower uberjar creation in lein 1.4.
There are (at least) two solutions to this:
- Build smaller jars by having projects with dependencies specific to a given job
- Cache the dependencies on the cluster, send over only your dependency list, and pull the jars you need into the Distributed Cache at job configuration time.
To allow us to continue packaging all our job steps in a single jar, source tree, and lein project, we opted for the latter solution, which I will now describe.
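The configuration-time half of the latter approach can be sketched in a few lines of Clojure. This is a minimal sketch under stated assumptions, not the post’s actual implementation: it assumes the dependency jars have already been uploaded to a known HDFS directory on the cluster, and the namespace, function, and path names are all illustrative. It uses Hadoop’s `DistributedCache/addFileToClassPath`, which adds a cached file to the job’s classpath.

```clojure
;; Hypothetical sketch: pull a job's dependency jars from a
;; pre-populated HDFS cache directory into the DistributedCache
;; at job configuration time, instead of shipping an uberjar.
(ns example.cache-deps
  (:import [org.apache.hadoop.conf Configuration]
           [org.apache.hadoop.filecache DistributedCache]
           [org.apache.hadoop.fs Path]))

(defn add-cached-deps!
  "For each jar name in dep-names (assumed to already exist under
   hdfs-dir on the cluster), add it to the job's classpath via the
   DistributedCache. Returns the mutated Configuration."
  [^Configuration conf hdfs-dir dep-names]
  (doseq [dep dep-names]
    (DistributedCache/addFileToClassPath
      (Path. (str hdfs-dir "/" dep)) conf))
  conf)

;; Illustrative usage during job setup (names are assumptions):
;; (add-cached-deps! job-conf "/cache/jars"
;;                   ["clojure-1.2.0.jar" "commons-io-1.4.jar"])
```

With something like this in the job setup path, only the small project jar and the dependency list travel over the slow upload link; the large, rarely changing dependency jars stay cached on the cluster side.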
Continue reading “Streamlining Hadoop and Clojure with Maven”