
Is Data the New Oil of Industry?

Let's go back to the 18th century, when industrial development was taking its first footsteps and oil was considered a cornerstone of the industrial revolution. Oil was the most valuable asset of that era. Now come back to the present: in the 21st century, data is widely called the foundation of the information revolution. But the question that arises is: why are we really calling data the new oil? Here is an explanation.


Now let's compare data and oil:

  1. Data is an essential resource that powers the information economy in much the way that oil has fueled the industrial economy.
  2. Once upon a time, the wealthiest were those with the most natural resources; in today's knowledge economy, what you know is proportional to the data you have.
  3. Information can be extracted from data just as energy can be extracted from oil.
  4. Traditional oil powered the transportation era; in the same way, data as the new oil is powering emerging transportation options like driverless cars and the hyperloop (1,200 km/hr), which are based on advanced synthesis of data in the form of algorithms and cognitive knowledge, without the use of fossil fuel.
  5. Traditional oil is finite; data availability seems infinite.
  6. Data flows like oil but we must “drill down” into data to extract value from it. Data promises a plethora of new uses — diagnosis of diseases, direction of traffic patterns, etc. — just as oil has produced useful plastics, petrochemicals, lubricants, gasoline, and home heating.
  7. Oil is a scarce resource. Data isn’t just abundant, it is a cumulative resource.
  8. If Oil is being used, then the same oil cannot be used somewhere else because it’s a rival good. This results in a natural tension about who controls oil. If Data is being used, the same Data can be used elsewhere because it’s a non-rival good.
  9. As a tangible product, Oil faces high friction, transportation and storage costs. As an intangible product, Data has much lower friction, transportation and storage costs.
  10. The life cycle of Oil is defined by process: extraction, refining, distribution. The life cycle of Data is defined by relationships: with other data, with context and with itself via feedback loops.
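The "drilling down" idea in point 6 can be sketched in a few lines of code. This is a toy illustration with made-up traffic readings (the hours and counts are hypothetical, chosen only for the example): raw observations are "refined" into a useful insight, much as crude oil is refined into useful products.

```python
# Hypothetical hourly traffic counts: the "crude" data.
raw_readings = [
    ("06:00", 120), ("07:00", 480), ("08:00", 910),
    ("09:00", 640), ("10:00", 300),
]

# "Refine" the raw data: extract the peak hour and the total volume.
peak_hour, peak_count = max(raw_readings, key=lambda r: r[1])
total = sum(count for _, count in raw_readings)

print(f"Peak traffic at {peak_hour} with {peak_count} vehicles")
print(f"Total observed: {total} vehicles")
```

The raw numbers alone say little; the extracted peak-hour figure is what a traffic planner could actually act on, and that extraction step is where the value appears.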
Data is valuable, and can be ‘mined’ and refined, like oil. But there are many differences where the analogy breaks down:
  • Oil is a finite resource that we are drawing down on. Data is growing at an exponential rate.
  • Oil is consumed when it is used. Data is not. We can make copies of data.
  • Oil is stored physically and is not easily replicable. Data is stored digitally and is readily replicated.
  • Oil is a commodity. Data is highly context dependent.
There are many other analogies for data as well. For example:
  1. Data is like currency (a medium for exchange, when we exchange our data for ‘free’ services)
  2. Data is like water (abundant and essential for our survival, but requiring cleaning)
  3. Data is a weapon (dormant, but with the potential to cause harm)
However, each of these captures only some aspects of data while editing out the others. Ultimately, all analogies break down, and it may be futile to look for a single phrase that captures the multi-faceted nature of data.
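One concrete place where the oil analogy clearly fails, noted above, is that data is non-rival: using it does not consume it. A minimal sketch, using an invented sensor dataset purely for illustration:

```python
import copy

# A small, hypothetical dataset: the "source" resource.
dataset = {"sensor": "A1", "values": [3, 1, 4, 1, 5]}

# Two independent consumers each take a full copy and transform it
# without depleting or altering the original, unlike a barrel of oil.
consumer_a = copy.deepcopy(dataset)
consumer_a["values"].sort()

consumer_b = copy.deepcopy(dataset)
consumer_b["values"].reverse()

print(dataset["values"])  # the original is untouched
```

Both consumers got the whole resource, and the source still holds its original values; no such copy operation exists for a physical commodity.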
In my view, this is subjective; everyone has their own explanation. 😉

Happy Learning...!!

