Introduction to Distributed File System (DFS)
02 Dec
Docile Your Big Data Using Hadoop
05 May
The beginnings of Apache Hadoop date back to 2003. Hadoop is used to develop and run distributed applications that process extremely large amounts of data. It stores data and runs applications on clusters of commodity hardware, providing massive storage for any kind of data, enormous processing power, and the capacity to handle a virtually unlimited number of concurrent tasks or jobs. It was designed to scale up from single servers to thousands of machines, each offering local computation and storage. And rather than relying on hardware to deliver high availability, the Apache Hadoop software library is designed to detect and handle failures at the application layer.
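To make the processing model behind Hadoop concrete, here is a minimal sketch of the MapReduce flow (map, shuffle, reduce) as a local simulation in plain Python. This is not the Hadoop API itself; the function names and the word-count example are illustrative only.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in one input split."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(mapped_pairs):
    """Shuffle: group all emitted values by key, as the framework
    does between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: combine all counts for one word into a total."""
    return (key, sum(values))

# Each string stands in for one input split processed by one mapper.
splits = ["Hadoop stores data", "Hadoop processes data"]
mapped = [pair for split in splits for pair in map_phase(split)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'hadoop': 2, 'stores': 1, 'data': 2, 'processes': 1}
```

In real Hadoop the map and reduce tasks run in parallel on different machines and the shuffle moves data over the network; the logic per record, however, is the same as above.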
An introduction to Hadoop
29 Apr
This article is an introduction to Hadoop. It will help you understand what Hadoop actually is, and it delves briefly into the history and different versions of Hadoop. For the uninitiated, it also looks at the high-level architecture of Hadoop and its different modules.
Tags: Big Data, Cluster, Distributed File System, Hadoop, Hadoop Architecture, HDFS, MapReduce, YARN
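One of the modules mentioned above, HDFS, stores each file as fixed-size blocks replicated across several DataNodes. The sketch below simulates that idea in plain Python; the tiny block size, node names, and round-robin placement policy are simplified assumptions for illustration, not HDFS's actual defaults or placement algorithm.

```python
BLOCK_SIZE = 8    # HDFS uses large blocks (e.g. 128 MB); tiny here for illustration
REPLICATION = 3   # each block is stored on this many distinct nodes

def split_into_blocks(data, block_size=BLOCK_SIZE):
    """Split a file's bytes into fixed-size blocks, as HDFS does on write."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_replicas(blocks, datanodes, replication=REPLICATION):
    """Assign each block to `replication` distinct nodes
    (round-robin here; real HDFS is rack-aware)."""
    placement = {}
    for block_id in range(len(blocks)):
        placement[block_id] = [datanodes[(block_id + r) % len(datanodes)]
                               for r in range(replication)]
    return placement

data = b"an example file stored in a distributed file system"
blocks = split_into_blocks(data)
nodes = ["node1", "node2", "node3", "node4"]
placement = place_replicas(blocks, nodes)
print(len(blocks))        # 7 blocks of up to 8 bytes each
print(placement[0])       # ['node1', 'node2', 'node3']
```

Because every block lives on several nodes, the loss of any single machine leaves all data readable, which is how HDFS delivers availability on unreliable commodity hardware.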
An introduction to Big Data
14 Apr
Big Data is an evolving topic. These technologies are not a replacement for standard relational databases; rather, they are used to collect, store, and process amounts of data too large for such databases to handle. Implemented correctly, Big Data technologies can help organizations analyze their data and improve their business decisions.