Hadoop implements MapReduce, a programming model devised at Google.
MapReduce is a model for processing huge amounts of data in parallel. As the name suggests, the work is divided into a Map phase and a Reduce phase.
- The MapReduce job first splits the input data set into independent chunks (one big data set becomes multiple small data sets).
- Map task: processes these chunks in a completely parallel manner (one node can process one or more chunks).
- The framework sorts and groups the outputs of the maps by key.
- Reduce task: takes the sorted map output as its input and produces the final result.
Your business logic is written in the Map task and the Reduce task.
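The flow above can be sketched in plain Python as a tiny word-count example. This is only an illustration of the model, not Hadoop code; the function names (`map_task`, `shuffle_and_sort`, `reduce_task`) are made up for this sketch:

```python
from collections import defaultdict

def map_task(chunk):
    """Map: emit a (word, 1) pair for every word in one input chunk."""
    for word in chunk.split():
        yield (word.lower(), 1)

def shuffle_and_sort(mapped_pairs):
    """Framework step: group the map outputs by key and sort them."""
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return sorted(groups.items())

def reduce_task(key, values):
    """Reduce: combine all values for one key into the final result."""
    return (key, sum(values))

# The input split into independent chunks, as the framework would do.
chunks = ["the quick brown fox", "the lazy dog", "the fox"]

mapped = [pair for chunk in chunks for pair in map_task(chunk)]
result = dict(reduce_task(k, vs) for k, vs in shuffle_and_sort(mapped))
print(result)  # word counts across all chunks
```

Here the word count is the business logic: `map_task` and `reduce_task` are the parts you would write, while splitting the input and the shuffle/sort step are what the framework does for you.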
Typically both the input and the output of the job are stored in a file system (not a database). The framework takes care of scheduling tasks, monitoring them, and re-executing failed tasks.