What is Big Data?

We come across data in every possible form, whether through social media sites, sensor networks, digital images or videos, cellphone GPS signals, purchase transaction records, web logs, medical records, archives, military surveillance, e-commerce, or complex scientific research. Together it amounts to many quintillion bytes of data. This data is what we call BIG DATA!

Big Data is simply a collection of data so huge and complex that it becomes very tedious to capture, store, process, retrieve, and analyze with on-hand database management tools or traditional data processing techniques.

The 3 Vs of Big Data

  1. Volume: Big Data is defined first by its sheer volume, which can run to hundreds of terabytes or even petabytes of information. For instance, 15 terabytes of Facebook posts or 400 billion annual medical records could mean Big Data!
  2. Velocity: Velocity is the rate at which data flows into an organization. Big Data requires fast processing, and the time factor plays a crucial role in many organizations. For instance, processing 2 million stock-market records or evaluating the results of millions of students who applied for competitive exams could mean Big Data!
  3. Variety: Big Data does not belong to any specific format. It can come in any form: structured or unstructured, text, images, audio, video, log files, emails, simulations, 3D models, etc. Research shows that a substantial amount of an organization's data is not numeric, yet such data is equally important to the decision-making process. So organizations need to think beyond stock records, documents, personnel files, finances, etc.

So, how do we handle this Big Data?

Simple! By using Hadoop.

What is Hadoop?

Hadoop is an open-source software framework for storing data and running applications on clusters of commodity hardware. It provides massive storage for any kind of data, enormous processing power and the ability to handle virtually limitless concurrent tasks or jobs.
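
To make that concrete, below is the classic word-count job written against Hadoop's MapReduce Java API (essentially the introductory example from the Apache Hadoop tutorial): the mapper emits a (word, 1) pair for every token in its input split, and the reducer sums the counts for each word. Input and output paths come from the command line; treat it as a minimal sketch, not a production job.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: for each line of text, emit (word, 1) for every token.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer (also used as a combiner): sum the counts for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The same jar runs unchanged on one node or a thousand: Hadoop splits the input across the cluster and schedules the map and reduce tasks for you.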

Why is Hadoop important?


  1. Ability to store and process huge amounts of any kind of data, quickly. With data volumes and varieties constantly increasing, especially from social media and the Internet of Things (IoT), that’s a key consideration.
  2. Computing power. Hadoop’s distributed computing model processes big data fast. The more computing nodes you use, the more processing power you have.
  3. Fault tolerance. Data and application processing are protected against hardware failure. If a node goes down, jobs are automatically redirected to other nodes so that the distributed computation does not fail, and multiple copies of all data are stored automatically (see the HDFS sketch after this list).
  4. Flexibility. Unlike traditional relational databases, you don’t have to preprocess data before storing it. You can store as much data as you want and decide how to use it later. That includes unstructured data like text, images and videos.
  5. Low cost. The open-source framework is free and uses commodity hardware to store large quantities of data.
  6. Scalability. You can easily grow your system to handle more data simply by adding nodes. Little administration is required.
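
To illustrate points 3 and 4, here is a small sketch that uses Hadoop's FileSystem Java API to write a raw text file into HDFS with a replication factor of 3, the common default behind the fault tolerance described above. The file path and contents are hypothetical, and the code assumes a running HDFS whose configuration is on the classpath.

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPutExample {
  public static void main(String[] args) throws Exception {
    // Picks up core-site.xml / hdfs-site.xml from the classpath.
    Configuration conf = new Configuration();
    // dfs.replication = how many copies HDFS keeps of each block;
    // 3 is the common default and is what provides fault tolerance.
    conf.set("dfs.replication", "3");

    FileSystem fs = FileSystem.get(conf);

    // Hypothetical path: raw, schema-less data is stored as-is
    // and interpreted later at read time ("schema on read").
    Path path = new Path("/data/raw/notes.txt");
    try (FSDataOutputStream out = fs.create(path)) {
      out.write("any raw data, structured or not"
          .getBytes(StandardCharsets.UTF_8));
    }

    fs.close();
  }
}
```

Because three copies of every block exist, losing a single DataNode costs nothing: the NameNode simply re-replicates the affected blocks from the surviving copies. And since no structure was declared up front, you decide how to interpret the file later, at read time.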

Hadoopers! Welcome.