
Posts

Showing posts from February, 2020
Myths about Data Science - A Must-Know for All Data Science Enthusiasts

1. Only coders/programmers can become data scientists. No, that is not correct. Anyone who has basic programming skills in Python or R, or who can at least learn basic programming, can come into this field. Here I would suggest that people with an engineering/software background choose Python as their programming language, while people who want to transition into data science from a non-engineering background such as arts, commerce, or science may prefer R. I am not saying that people from a non-technical background cannot learn Python; it is a bit harder to grasp the basics and the algorithms, but if they are ready to learn there is no issue, and they can pick either Python or R. I have mentioned which of these I consider better (i.e., Python) in another article, which you can refer to for a better understanding. 2. Data scientists are ma…

20 Must-Know Data Science Interview Questions by KDnuggets

The most important questions generally asked by the technical panel:
1. Explain what regularization is and why it is useful.
2. Which data scientists do you admire most? Which startups?
3. How would you validate a model you created to generate a predictive model of a quantitative outcome variable using multiple regression?
4. Explain what precision and recall are. How do they relate to the ROC curve?
5. How can you prove that one improvement you've brought to an algorithm is really an improvement over not doing anything?
6. What is root cause analysis?
7. Are you familiar with pricing optimization, price elasticity, inventory management, competitive intelligence? Give examples.
8. What is statistical power?
9. Explain what resampling methods are and why they are useful. Also explain their limitations.
10. Is it better to have too many false positives, or too many false negatives? Explain.
11. What is selection bias, why is it important, and how can you avoid i…

Why the Central Limit Theorem is Important for Every Data Scientist

The Central Limit Theorem is at the core of what every data scientist does daily: make statistical inferences about data. The theorem lets us quantify how much our sample is likely to deviate from the population without having to take any new sample to compare it with. We don't need the characteristics of the whole population to understand the likelihood of our sample being representative of it. The concepts of confidence intervals and hypothesis testing are based on the CLT. Because the sample mean follows an approximately normal distribution, we know that about 68 percent of sample means lie within one standard error of the population mean, about 95 percent lie within two standard errors, and so on. In other words, "It all has to do with the distribution of our population. This theorem allows you to simplify problems in statistics by allowing you to work with a distribution that is approximately normal." The CLT is…
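A minimal simulation sketch (assuming NumPy is available) of this idea: even when the population is skewed, the means of many repeated samples cluster in an approximately normal shape around the population mean, with spread close to the theoretical standard error.

import numpy as np

rng = np.random.default_rng(42)

# A skewed, non-normal population: exponential with mean 1.0
population = rng.exponential(scale=1.0, size=100_000)

sample_size = 50
n_samples = 10_000

# Draw many samples and record each sample mean
sample_means = np.array([
    rng.choice(population, size=sample_size, replace=False).mean()
    for _ in range(n_samples)
])

print("Population mean:", population.mean())
print("Mean of sample means:", sample_means.mean())    # close to the population mean
print("Std of sample means:", sample_means.std())      # close to sigma / sqrt(n)
print("Theoretical standard error:", population.std() / np.sqrt(sample_size))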

Ensemble Methods: A Detailed Explanation

One of the major tasks of machine learning algorithms is to construct a fair model from a dataset. The process of generating models from data is called learning or training, and the learned model can be called a hypothesis or a learner. Learning algorithms that construct a set of classifiers and then classify new data points by combining their predictions are known as ensemble methods. In other words, "an ensemble method is a machine learning technique that combines several base models in order to produce one optimal predictive model." Why use ensemble methods? Learning algorithms that output only a single hypothesis tend to suffer from three issues: the statistical problem, the computational problem, and the representation problem, all of which can be partly overcome by applying ensemble methods. A learning algorithm that suffers from the statistical problem is said to have high variance. An algorithm that exhibits the co…
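A minimal sketch (assuming scikit-learn is installed) of one common ensemble method, hard voting: several different base learners are trained on the same data and the majority prediction wins.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three diverse base learners (hypotheses) combined into one ensemble
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=5000)),
        ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
        ("svm", SVC(kernel="rbf", random_state=0)),
    ],
    voting="hard",  # each model gets one vote; the majority class is predicted
)
ensemble.fit(X_train, y_train)
print("Ensemble accuracy:", ensemble.score(X_test, y_test))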

Goals of ML Problem ?

The goal of any machine learning problem is to find the single model that best predicts our desired outcome. Rather than building one model and hoping it is the best, most accurate predictor we can make, ensemble methods take a myriad of models into account and average them to produce one final model. It is important to note that decision trees are not the only models that can be combined this way, just the most popular and relevant base learners in data science today.
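As a sketch of this "average many models" idea (again assuming scikit-learn and NumPy), the snippet below trains several decision trees on bootstrap samples and averages their predictions, which is essentially what bagging does.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_models = 25
predictions = []

for _ in range(n_models):
    # Bootstrap sample: draw training rows with replacement
    idx = rng.integers(0, len(X_train), size=len(X_train))
    tree = DecisionTreeRegressor(max_depth=6, random_state=0)
    tree.fit(X_train[idx], y_train[idx])
    predictions.append(tree.predict(X_test))

single_tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_train, y_train)
averaged = np.mean(predictions, axis=0)  # the ensemble's final prediction

print("Single tree MSE:", mean_squared_error(y_test, single_tree.predict(X_test)))
print("Averaged ensemble MSE:", mean_squared_error(y_test, averaged))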

Data Science Interview Questions - Part 2

1) What are the differences between supervised and unsupervised learning?
Supervised learning: uses known, labeled data as input; has a feedback mechanism; the most commonly used algorithms are decision trees, logistic regression, and support vector machines.
Unsupervised learning: uses unlabeled data as input; has no feedback mechanism; the most commonly used algorithms are k-means clustering, hierarchical clustering, and the apriori algorithm.
2) How is logistic regression done? Logistic regression measures the relationship between the dependent variable (our label, what we want to predict) and one or more independent variables (our features) by estimating probabilities with its underlying logistic (sigmoid) function, sigmoid(z) = 1 / (1 + e^(-z)), which maps any real-valued input to a probability between 0 and 1; see the sketch after this excerpt.
3) Explain the steps in making a decision tree…
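A minimal sketch (assuming NumPy and scikit-learn) of the sigmoid function and of how a fitted logistic regression turns the linear combination of features into class probabilities.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    # Logistic (sigmoid) function: maps any real number to (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(np.array([-4.0, 0.0, 4.0])))  # roughly [0.018, 0.5, 0.982]

# Fit logistic regression on a small synthetic binary problem
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

# The predicted probability is the sigmoid applied to w.x + b
z = X @ model.coef_.ravel() + model.intercept_
manual_probs = sigmoid(z)
print(np.allclose(manual_probs, model.predict_proba(X)[:, 1]))  # True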