
Why Is the Central Limit Theorem Important for Every Data Scientist?

The Central Limit Theorem (CLT) is at the core of what every data scientist does daily: making statistical inferences about data.

The theorem lets us quantify how much our sample is likely to deviate from the population without having to draw any new sample to compare it with. We don’t need to know the characteristics of the whole population to judge how likely our sample is to be representative of it.

The concepts of confidence intervals and hypothesis testing are built on the CLT. Because sample means fit an approximately normal distribution, we know that roughly 68 percent of sample means lie within one standard error of the population mean, about 95 percent lie within two standard errors, and so on. In other words, it all comes down to the distribution of the sample mean: the theorem lets us simplify problems in statistics by working with a distribution that is approximately normal.

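We can check those coverage figures with a quick simulation (a minimal sketch with NumPy; the uniform population, the sample size of 50, and the seed are arbitrary assumptions for illustration, not part of the original post):

```python
import numpy as np

rng = np.random.default_rng(42)

# Arbitrary "population": a million draws from a uniform distribution.
population = rng.uniform(0, 100, size=1_000_000)
mu = population.mean()

n = 50            # sample size (arbitrary choice)
n_samples = 10_000

# Sampling distribution of the mean: many independent sample means.
sample_means = rng.choice(population, size=(n_samples, n)).mean(axis=1)

# Standard error predicted by the CLT: sigma / sqrt(n).
se = population.std() / np.sqrt(n)

within_1se = np.mean(np.abs(sample_means - mu) < 1 * se)
within_2se = np.mean(np.abs(sample_means - mu) < 2 * se)
print(f"within 1 SE: {within_1se:.3f}")   # roughly 0.68
print(f"within 2 SE: {within_2se:.3f}")   # roughly 0.95
```
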
The CLT is not limited to making inferences from a sample about a population. There are four kinds of inferences we can make based on the CLT:

1. We have information about a valid sample. We can make accurate assumptions about its population.
2. We have information about the population. We can make accurate assumptions about a valid sample from that population.
3. We have information about a population and a valid sample. We can accurately infer whether the sample was drawn from that population.
4. We have information about two different valid samples. We can accurately infer whether the two samples were drawn from the same population (a quick sketch of this case follows the list).
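
As an illustration of case 4, here is a minimal sketch using Welch's two-sample t-test from scipy.stats; the samples, means, and seed are synthetic assumptions, not data from the post:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two synthetic samples from the same population, one from a shifted one.
sample_a = rng.normal(loc=10.0, scale=2.0, size=100)
sample_b = rng.normal(loc=10.0, scale=2.0, size=100)   # same population
sample_c = rng.normal(loc=11.0, scale=2.0, size=100)   # shifted population

# Welch's t-test: the CLT lets us treat the difference of sample means
# as approximately normal, which is what the test statistic relies on.
t_same, p_same = stats.ttest_ind(sample_a, sample_b, equal_var=False)
t_diff, p_diff = stats.ttest_ind(sample_a, sample_c, equal_var=False)

print(f"same population:      p = {p_same:.3f}")   # typically large
print(f"different population: p = {p_diff:.3g}")   # typically tiny
```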

Conditions for the Central Limit Theorem:

Independence.
>> The sampled observations must be independent.
>> Sampling should be random.
>> If sampling without replacement, the sample should be less than 10% of the population.

Sample skew
>> The population distribution should be normal.
>> But if the distribution is skewed, the sample size must be large (greater than 30); a quick simulation of this rule of thumb follows below.
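
To see why the n > 30 rule of thumb helps with skewed populations, here is a minimal simulation sketch (the exponential population, sample sizes, and seed are arbitrary assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# A strongly right-skewed "population": an exponential distribution.
population = rng.exponential(scale=2.0, size=1_000_000)

for n in (5, 40):   # below and above the usual n >= 30 rule of thumb
    means = rng.choice(population, size=(5_000, n)).mean(axis=1)
    print(f"n={n:>2}: skewness of sample means = {stats.skew(means):.2f}")

# The skewness of the sample means shrinks toward 0 (i.e., toward
# a symmetric, normal-looking distribution) as n grows.
```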

Important Points to Remember:

The central limit theorem (CLT) states that the distribution of sample means approximates a normal distribution as the sample size gets larger.

Sample sizes equal to or greater than 30 are considered sufficient for the CLT to hold.

A key aspect of the CLT is that the average of the sample means will equal the population mean, while the standard deviation of the sample means (the standard error) will equal the population standard deviation divided by the square root of the sample size; the short simulation after these points checks both.

A sufficiently large sample therefore lets us estimate the characteristics of a population accurately.
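
Here is a minimal check of the point about the mean and standard error (a sketch assuming an arbitrary gamma-distributed population, a sample size of 36, and 20,000 repeated samples):

```python
import numpy as np

rng = np.random.default_rng(7)

# Arbitrary skewed "population" (an assumption for illustration).
population = rng.gamma(shape=2.0, scale=3.0, size=1_000_000)
mu, sigma = population.mean(), population.std()

n = 36
sample_means = rng.choice(population, size=(20_000, n)).mean(axis=1)

# CLT predictions: mean of sample means ~ mu, std ~ sigma / sqrt(n).
print(f"mean of sample means: {sample_means.mean():.3f}  (population mean: {mu:.3f})")
print(f"std of sample means:  {sample_means.std():.3f}  (sigma/sqrt(n):   {sigma / np.sqrt(n):.3f})")
```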
