Apache Spark is a lightning-fast cluster computing framework designed for
fast computation. It builds on the ideas of Hadoop MapReduce and extends
the MapReduce model to efficiently support more types of computation,
including interactive queries and stream processing. This is a brief
tutorial that explains the basics of Spark Core programming.