Answer: Processing high dimensional data on a machine with limited memory is a strenuous task, and your interviewer would be fully aware of that. The following are the methods you can use to tackle such a situation:
Since we have limited RAM, we should close all other applications on our machine, including the web browser, so that most of the memory can be put to use.
We can randomly sample the data set. This means we can create a smaller data set, let’s say, having 1000 variables and 300000 rows and do the computations.
To reduce dimensionality, we can separate the numerical and categorical variables and remove the correlated variables. For numerical variables, we’ll use correlation. For categorical variables, we’ll use the chi-square test.
Also, we can use PCA and pick the components which explain the maximum variance in the data set.
Using online learning algorithms like Vowpal Wabbit (available in Python) is a possible option.
Building a linear model using stochastic gradient descent is also helpful (a sketch follows this list).
We can also apply our business understanding to estimate which predictors are likely to impact the response variable. But this is an intuitive approach; failing to identify useful predictors might result in a significant loss of information.
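As an illustration of the online-learning and SGD ideas above, here is a minimal sketch (using hypothetical synthetic chunks, so the data and parameters are assumptions) of training a linear model out-of-core with scikit-learn's SGDClassifier, so that the full data set never has to sit in memory:

import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(random_state=0)          # linear model trained with stochastic gradient descent
classes = np.array([0, 1])

for _ in range(50):                          # pretend each iteration reads one chunk from disk
    X_chunk = np.random.rand(1000, 20)       # hypothetical chunk of features
    y_chunk = np.random.randint(0, 2, 1000)  # hypothetical labels for the chunk
    clf.partial_fit(X_chunk, y_chunk, classes=classes)   # incremental (online) update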
Answer: Yes, rotation (orthogonal rotation) is necessary because it maximizes the difference between the variance captured by the components. This makes the components easier to interpret. Not to forget, that is the motive of doing PCA, where we aim to select fewer components (than features) which can explain the maximum variance in the data set. Rotation does not change the relative locations of the components; it only changes the actual coordinates of the points.
If we don't rotate the components, the effect of PCA will diminish and we'll have to select more components to explain the variance in the data set.
Answer: This question has enough hints for you to start thinking! Since the data is spread around the median, let's assume it's a normal distribution. We know that in a normal distribution ~68% of the data lies within 1 standard deviation of the mean (or mode, or median, which coincide here), which leaves ~32% of the data outside that range. Therefore, ~32% of the data would remain unaffected by the missing values.
Answer: If you have worked on enough data sets, you should deduce that cancer detection results in imbalanced data. In an imbalanced data set, accuracy should not be used as a measure of performance, because 96% accuracy (as given) might correspond to predicting only the majority class correctly, while our class of interest is the minority class (4%): the people who were actually diagnosed with cancer. Hence, in order to evaluate model performance, we should use Sensitivity (True Positive Rate), Specificity (True Negative Rate) and the F measure to determine the class-wise performance of the classifier. If the minority class performance is found to be poor, we can undertake the following steps:
We can use undersampling, oversampling or SMOTE to make the data balanced.
We can alter the prediction threshold by finding an optimal cut-off using the ROC curve (see the sketch after this list).
We can assign a weight to classes such that the minority classes get larger weight.
We can also use anomaly detection.
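A hedged sketch of the class-weighting and threshold-tuning ideas above, on synthetic data (the data set and parameters are assumptions, not part of the question):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

# 96% / 4% imbalance, mirroring the scenario in the question
X, y = make_classification(n_samples=5000, weights=[0.96, 0.04], random_state=0)
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)

probs = clf.predict_proba(X)[:, 1]
fpr, tpr, thresholds = roc_curve(y, probs)
best_threshold = thresholds[np.argmax(tpr - fpr)]   # Youden's J: one common way to pick a cut-off
print("AUC:", roc_auc_score(y, probs), "chosen threshold:", best_threshold)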
Answer: Naive Bayes is so 'naive' because it assumes that all of the features in a data set are equally important and independent of each other. As we know, these assumptions are rarely true in a real-world scenario.
Answer: Prior probability is nothing but the proportion of the dependent (binary) variable in the data set. It is the closest guess you can make about a class without any further information.
For example: In a data set, the dependent variable is binary (1 and 0). The proportion of 1 (spam) is 70% and of 0 (not spam) is 30%. Hence, we can estimate that there is a 70% chance that any new email would be classified as spam.
The likelihood is the probability of classifying a given observation as 1 in the presence of some other variable.
For example, the probability that the word ‘FREE’ is used in the previous spam message is a likelihood. The marginal likelihood is the probability that the word ‘FREE’ is used in any message.
Answer: Time series data is known to possess linearity. On the other hand, a decision tree algorithm is known to work best at detecting non-linear interactions. The reason the decision tree failed to provide robust predictions is that it couldn't map the linear relationship as well as the regression model did. Therefore, we learned that a linear regression model can provide robust predictions, given that the data set satisfies its linearity assumptions.
Answer: You might have started hopping through the list of ML algorithms in your mind. But, wait! Such questions are asked to test your machine learning fundamentals. This is not a machine learning problem. This is a route optimization problem. A machine learning problem consists of three things:
1. There exists a pattern.
2. You cannot solve it mathematically (even by writing exponential equations).
3. You have data on it.
Always look for these three factors to decide if machine learning is a tool to solve a particular problem.
Answer: Low bias occurs when the model's predicted values are near the actual values. In other words, the model becomes flexible enough to mimic the training data distribution. While this sounds like a great achievement, let's not forget that a flexible model has no generalization capability. It means that when this model is tested on unseen data, it gives disappointing results.
In such situations, we can use the bagging algorithm (like random forest) to tackle high variance problems. Bagging algorithms divide a data set into subsets made with repeated randomized sampling. Then, these samples are used to generate a set of models using a single learning algorithm. Later, the model predictions are combined using voting (classification) or averaging (regression).
Also, to combat high variance, we can:
Use the regularization techniques, where higher model coefficients get penalized, hence lowering model complexity.
Use the top n features from the variable importance chart. Maybe, with all the variables in the data set, the algorithm is having difficulty finding a meaningful signal (a sketch of both ideas follows).
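A sketch of both ideas on synthetic data (the data set, alpha value and n=10 are illustrative assumptions): ridge regularization to shrink coefficients, and keeping only the top n features from a variable importance chart.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=50, n_informative=10, random_state=0)

ridge = Ridge(alpha=10.0).fit(X, y)   # larger alpha -> stronger coefficient shrinkage, lower variance

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
top_n = np.argsort(rf.feature_importances_)[::-1][:10]   # indices of the 10 most important features
X_reduced = X[:, top_n]                                   # refit any model on this reduced feature set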
Answer: Chances are, you might be tempted to say no, but that would be incorrect. Discarding correlated variables has a substantial effect on PCA because, in the presence of correlated variables, the variance explained by a particular component gets inflated.
For example, suppose you have 3 variables in a data set, of which 2 are correlated. If you run PCA on this data set, the first principal component would exhibit twice the variance that it would exhibit with uncorrelated variables. Also, adding correlated variables makes PCA put more importance on those variables, which is misleading.
Answer: As we know, ensemble learners are based on the idea of combining weak learners to create strong learners. But these learners provide superior results only when the combined models are uncorrelated. Since we have used 5 GBM models and got no accuracy improvement, it suggests that the models are correlated. The problem with correlated models is that they all provide the same information.
For example: If model 1 has classified User1122 as 1, there are high chances model 2 and model 3 would have done the same, even if its actual value is 0. Therefore, ensemble learners are built over the premise of combining weak uncorrelated models to obtain better predictions.
Answer: Don't get misled by the 'k' in their names. You should know that the fundamental difference between these two algorithms is that k-means is unsupervised in nature and kNN is supervised in nature. k-means is a clustering algorithm; kNN is a classification (or regression) algorithm.
The k-means algorithm partitions a data set into clusters such that each cluster formed is homogeneous and the points in each cluster are close to each other. The algorithm tries to maintain enough separability between these clusters. Due to its unsupervised nature, the clusters have no labels. The kNN algorithm tries to classify an unlabeled observation based on its k (which can be any number) surrounding neighbors. It is also known as a lazy learner because it involves minimal training of the model: it doesn't use the training data to build a generalized model ahead of time for unseen data sets.
Answer: True Positive Rate = Recall. Yes, they are equal, both having the formula TP / (TP + FN).
Answer: Yes, it is possible. We need to understand the significance of the intercept term in a regression model. The intercept term shows the model prediction without any independent variable, i.e. the mean prediction.
The formula for R² is:
R² = 1 − Σ(y − ŷ)² / Σ(y − ȳ)²
where ŷ is the predicted value and ȳ is the mean of y.
When the intercept term is present, the R² value evaluates your model with respect to the mean model. In the absence of an intercept term, the mean baseline ȳ is replaced by zero, so the denominator becomes Σy², which is typically much larger. The ratio Σ(y − ŷ)² / Σy² therefore becomes smaller than it should be, resulting in an artificially higher R².
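A toy illustration of this effect (synthetic data assumed) using statsmodels, which follows the convention described above: without a constant, R² is computed against zero rather than the mean, so it can come out misleadingly high.

import numpy as np
import statsmodels.api as sm

rng = np.random.RandomState(0)
x = rng.rand(100)
y = 5 + 0.1 * x + rng.randn(100) * 0.5            # weak relationship, large positive offset

with_const = sm.OLS(y, sm.add_constant(x)).fit()  # model with an intercept
without_const = sm.OLS(y, x).fit()                # model without an intercept
print(with_const.rsquared, without_const.rsquared)  # the second value is typically far higher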
Answer: To check for multicollinearity, we can create a correlation matrix to identify and remove variables having a correlation above 75% (deciding a threshold is subjective). In addition, we can calculate the VIF (variance inflation factor) to check for the presence of multicollinearity.
A VIF value <= 4 suggests no serious multicollinearity, whereas a value >= 10 implies serious multicollinearity. Also, we can use tolerance as an indicator of multicollinearity.
But removing correlated variables might lead to a loss of information. In order to retain those variables, we can use penalized regression models like ridge or lasso regression. We can also add some random noise to the correlated variables so that the variables become different from each other. However, adding noise might affect the prediction accuracy, hence this approach should be used carefully.
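A sketch of computing VIF (and tolerance = 1/VIF) with statsmodels; the small DataFrame below is made up purely for illustration.

import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

df = pd.DataFrame({"x1": [1, 2, 3, 4, 5],
                   "x2": [2, 4.1, 6, 8.2, 10],   # nearly a multiple of x1 -> high VIF expected
                   "x3": [5, 3, 6, 2, 7]})
X = add_constant(df)
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                index=X.columns)
print(vif)          # tolerance for each predictor is simply 1 / VIF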
Answer: You can quote ISLR's authors Hastie and Tibshirani, who asserted that in the presence of a few variables with medium/large-sized effects, we should use lasso regression, while in the presence of many variables with small/medium-sized effects, we should use ridge regression.
Conceptually, we can say that lasso regression (L1) does both variable selection and parameter shrinkage, whereas ridge regression only does parameter shrinkage and ends up including all the coefficients in the model. In the presence of correlated variables, ridge regression might be the preferred choice. Also, ridge regression works best in situations where the least square estimates have higher variance. Therefore, it depends on our model objective.
Answer: After reading this question, you should have understood that this is a classic case of "causation and correlation". No, we can't conclude that the decrease in the number of pirates caused climate change, because there might be other factors (lurking or confounding variables) influencing this phenomenon. Therefore, there might be a correlation between global average temperature and the number of pirates, but based on this information we can't say that pirates died because of the rise in global average temperature.
Answer:
Following are the methods of variable selection you can use:
1. Remove the correlated variables prior to selecting important variables
2. Use linear regression and select variables based on p values
3. Use Forward Selection, Backward Selection, Stepwise Selection
4. Use Random Forest, Xgboost and plot variable importance chart
5. Use Lasso Regression (a sketch follows this list)
6. Measure information gain for the available set of features and select top n features accordingly.
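A sketch of point 5 above on synthetic data (the data set and cv=5 are assumptions): lasso drives unimportant coefficients to exactly zero, and we keep the surviving variables.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=300, n_features=40, n_informative=8, noise=5.0, random_state=0)
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.where(lasso.coef_ != 0)[0]       # indices of the variables the lasso kept
print(len(selected), "variables selected out of", X.shape[1])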
Answer:
Correlation is the standardized form of covariance.
Covariances are difficult to compare. For example, if we calculate the covariance of salary (in $) and age (in years), we will get a number that is hard to interpret or compare with other covariances because the variables have unequal scales. To combat such a situation, we calculate correlation, which gives a value between -1 and 1, irrespective of the variables' scales.
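A quick numerical illustration with made-up salary and age values: rescaling salary changes the covariance but leaves the correlation untouched.

import numpy as np

age = np.array([25, 32, 41, 28, 36], dtype=float)
salary_usd = np.array([40000, 52000, 70000, 45000, 61000], dtype=float)

print(np.cov(age, salary_usd)[0, 1])              # covariance in (years x dollars)
print(np.cov(age, salary_usd / 1000)[0, 1])       # changes when salary is expressed in $1000s
print(np.corrcoef(age, salary_usd)[0, 1])         # correlation, always between -1 and 1
print(np.corrcoef(age, salary_usd / 1000)[0, 1])  # unchanged by the rescaling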
Answer:
Yes, we can use ANCOVA (analysis of covariance) technique to capture the association between continuous and categorical variables.
Answer:
The fundamental difference is, random forest uses bagging techniques to make predictions. GBM uses boosting techniques to make predictions.
In the bagging technique, a data set is divided into n samples using randomized (bootstrap) sampling.
Then, using a single learning algorithm, a model is built on each sample. Later, the resultant predictions are combined using voting or averaging, and the models can be trained in parallel. In boosting, after the first round of predictions, the algorithm weighs misclassified observations higher, so that they can be corrected in the succeeding round. This sequential process of giving higher weights to misclassified observations continues until a stopping criterion is reached.
Random forest improves model accuracy mainly by reducing variance. The trees grown are uncorrelated to maximize the decrease in variance. On the other hand, GBM improves accuracy by reducing both bias and variance in a model.
Answer:
A classification tree makes its splits based on the Gini index or node entropy. In simple words, the tree algorithm finds the best possible feature which can divide the data set into the purest possible child nodes.
The Gini index says that if we select two items from a population at random, then they must be of the same class, and the probability of this is 1 if the population is pure. We can calculate Gini as follows:
1. Calculate Gini for the sub-nodes, using the formula: sum of the squares of the probabilities of success and failure (p^2 + q^2).
2. Calculate Gini for the split using the weighted Gini score of each node of that split.
Entropy is the measure of impurity and, for a binary class, is given by:
Entropy = −p*log2(p) − q*log2(q)
Here p and q are the probabilities of success and failure respectively in that node. Entropy is zero when a node is homogeneous and is maximum (equal to 1) when both classes are present in a node at 50%–50%. Lower entropy is desirable.
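A small helper (binary case only) implementing the two node scores exactly as described above; the probabilities passed in are illustrative.

import math

def gini_score(p):
    # Gini score as defined above: p^2 + q^2 (equals 1 for a pure node)
    q = 1 - p
    return p ** 2 + q ** 2

def entropy(p):
    # Binary entropy: -p*log2(p) - q*log2(q) (0 for a pure node, 1 at a 50/50 split)
    if p in (0, 1):
        return 0.0
    q = 1 - p
    return -p * math.log2(p) - q * math.log2(q)

print(gini_score(0.5), entropy(0.5))   # 0.5 and 1.0 for a perfectly mixed node
print(gini_score(1.0), entropy(1.0))   # 1.0 and 0.0 for a pure node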
Answer:
The model has overfitted. A training error of 0.00 means the classifier has memorized the training data patterns to such an extent that they do not generalize to unseen data. Hence, when this classifier was run on an unseen sample, it couldn't find those patterns and returned predictions with a higher error. In a random forest, this happens when we use a larger number of trees than necessary. Hence, to avoid such situations, we should tune the number of trees using cross-validation.
Answer: In such high dimensional data sets, we can't use classical regression techniques, since their assumptions tend to fail. When p > n, we can no longer calculate a unique least-squares coefficient estimate; the variances become infinite, so OLS cannot be used at all.
To combat this situation, we can use penalized regression methods like lasso, LARS, ridge which can shrink the coefficients to reduce variance. Precisely, ridge regression works best in situations where the least square estimates have higher variance.
Other methods include subset regression and forward stepwise regression.
Answer: In the case of linearly separable data, the convex hull represents the outer boundary of each of the two groups of data points. Once the convex hulls are created, we get the maximum margin hyperplane (MMH) as a perpendicular bisector between the two convex hulls. The MMH is the line which attempts to create the greatest separation between the two groups.
Answer:
Don’t get baffled at this question. It’s a simple question asking the difference between the two.
With one hot encoding, the dimensionality (i.e. the number of features) in a data set increases, because it creates a new variable for each level present in a categorical variable. For example, let's say we have a variable 'color' with 3 levels, namely Red, Blue and Green. One hot encoding the 'color' variable will generate three new variables, Color.Red, Color.Blue and Color.Green, containing 0 and 1 values.
In label encoding, the levels of a categorical variable get encoded as integers such as 0 and 1, so no new variable is created. Label encoding is mostly used for binary variables.
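A sketch of both encodings using pandas, with a toy 'color' column assumed for illustration:

import pandas as pd

df = pd.DataFrame({"color": ["Red", "Blue", "Green", "Red"]})

one_hot = pd.get_dummies(df["color"], prefix="Color")   # three new 0/1 columns: Color_Red, Color_Blue, Color_Green
label = df["color"].astype("category").cat.codes        # a single integer-coded column
print(one_hot)
print(label)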
Answer:
Neither. In a time series problem, k-fold can be troublesome because there might be some pattern in year 4 or 5 which is not present in year 3. Resampling the data set will mix up these trends, and we might end up validating on past years, which is incorrect. Instead, we can use a forward chaining strategy with 5 folds, as shown below:
fold 1: training [1], test [2]
fold 2: training [1 2], test [3]
fold 3: training [1 2 3], test [4]
fold 4: training [1 2 3 4], test [5]
fold 5: training [1 2 3 4 5], test [6]
where 1,2,3,4,5,6 represents “year”.
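The same forward-chaining scheme can be sketched with scikit-learn's TimeSeriesSplit (here each row stands in for one year, purely for illustration):

import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(6).reshape(-1, 1)                 # stand-in for 6 years of data
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    print("train years:", train_idx + 1, "test year:", test_idx + 1)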
Answer:
We can deal with them in the following ways:
1. Assign a unique category to the missing values; who knows, the missing values might decipher some trend.
2. We can remove them outright.
3. Or, we can sensibly check their distribution against the target variable, and if we find any pattern, we'll keep those missing values and assign them a new category while removing the others.
Answer: The basic idea for this kind of recommendation engine comes from collaborative filtering, an algorithm that considers "user behavior" for recommending items. It exploits the behavior of other users and items in terms of transaction history, ratings, selection and purchase information. Other users' behavior and preferences over the items are used to recommend items to new users. In this case, the features of the items are not known.
Answer:
Type I error is committed when the null hypothesis is true and we reject it, also known as a ‘False Positive’. Type II error is committed when the null hypothesis is false and we accept it, also known as ‘False Negative’. In the context of the confusion matrix, we can say Type I error occurs when we classify a value as positive (1) when it is actually negative (0). Type II error occurs when we classify a value as negative (0) when it is actually positive(1).
Answer:
In the case of classification problems, we should always use stratified sampling instead of random sampling. A random sampling doesn’t take into consideration the proportion of target classes. On the contrary, stratified sampling helps to maintain the distribution of the target variables in the resultant distributed samples also.
Answer:
Tolerance (1 / VIF) is used as an indicator of multicollinearity. It indicates the percentage of the variance in a predictor that cannot be accounted for by the other predictors. Large values of tolerance are desirable.
We will consider adjusted R² as opposed to R² to evaluate model fit, because R² increases as we add more variables irrespective of any improvement in prediction accuracy. Adjusted R², however, only increases if an additional variable improves the accuracy of the model; otherwise, it stays the same. It is difficult to commit to a general threshold value for adjusted R² because it varies between data sets.
For example, a gene mutation data set might result in lower adjusted R² and still provide fairly good predictions, as compared to a stock market data where lower adjusted R² implies that the model is not good.
Answer:
We don't use Manhattan distance because it calculates distance only horizontally or vertically, i.e. it has dimension restrictions. On the other hand, the Euclidean metric can be used in any space to calculate distance. Since the data points can be present in any dimension, Euclidean distance is the more viable option.
Example: Think of a chess board. The path a rook takes between two squares is measured by Manhattan distance, because it moves only vertically and horizontally.
Answer:
It’s simple. It’s just like how babies learn to walk. Every time they fall down, they learn (unconsciously) & realize that their legs should be straight and not in a bend position. The next time they fall down, they feel pain. They cry. But, they learn ‘not to stand like that again’. In order to avoid that pain, they try harder. To succeed, they even seek support from the door or wall or anything near them, which helps them stand firm.
This is how a machine works & develops intuition from its environment.
Note: The interviewer is only trying to test whether you have the ability to explain complex concepts in simple terms.
Answer:
We can use the following methods:
1. Since logistic regression is used to predict probabilities, we can use the AUC-ROC curve along with the confusion matrix to determine its performance.
2. Also, the analogous metric of adjusted R² in logistic regression is AIC. AIC is the measure of fit which penalizes the model for the number of model coefficients. Therefore, we always prefer the model with minimum AIC value.
3. Null deviance indicates the response predicted by a model with nothing but an intercept; the lower the value, the better the model. Residual deviance indicates the response predicted by a model after adding the independent variables; again, the lower the value, the better the model (a sketch follows).
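A sketch of points 2 and 3 on synthetic data (the data and model are assumptions) using statsmodels, whose GLM results expose AIC, null deviance and residual deviance directly:

import numpy as np
import statsmodels.api as sm

rng = np.random.RandomState(0)
X = sm.add_constant(rng.randn(500, 3))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.randn(500) > 0).astype(int)

model = sm.GLM(y, X, family=sm.families.Binomial()).fit()   # logistic regression as a GLM
print("AIC:", model.aic)
print("Null deviance:", model.null_deviance)
print("Residual deviance:", model.deviance)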
Answer:
You should say that the choice of machine learning algorithm depends largely on the type of data. If you are given a data set which exhibits linearity, then linear regression would be the best algorithm to use. If you are asked to work on images or audio, then neural networks would help you build a robust model.
If the data comprises nonlinear interactions, then a boosting or bagging algorithm should be the choice. If the business requirement is to build a model that can be deployed, then we'll use regression or a decision tree model (easy to interpret and explain) instead of black-box algorithms like SVM, GBM, etc.
In short, there is no one master algorithm for all situations. We must be scrupulous enough to understand which algorithm to use.
Answer:
For better predictions, the categorical variable can be considered as a continuous variable only when the variable is ordinal in nature.
Answer:
Regularization becomes necessary when the model begins to overfit. This technique introduces a cost term into the objective function for bringing in more features.
Hence, it tries to push the coefficients of many variables towards zero, reducing the cost term.
This helps to reduce model complexity so that the model becomes better at predicting (generalizing).
Answer:
The error emerging from any model can be broken down mathematically into three components: bias error, variance and irreducible error (Total error = Bias² + Variance + Irreducible error).
These components are as follows:
Bias error is useful to quantify how much, on average, the predicted values differ from the actual values. A high bias error means we have an under-performing model that keeps missing important trends. Variance, on the other side, quantifies how different the predictions made on the same observation are from each other. A high variance model will overfit on your training population and perform badly on any observation beyond training. The irreducible error is the noise in the data that no model can remove.
Answer:
OLS and maximum likelihood are the methods used by the respective regression methods to approximate the unknown parameter (coefficient) values. In simple words, ordinary least squares (OLS) is a method used in linear regression which approximates the parameters so as to minimize the distance between the actual and predicted values. Maximum likelihood helps in choosing the values of the parameters which maximize the likelihood of producing the observed data.
Ans. What's wrong with ARIMA?
Autoregressive Integrated Moving Average, or ARIMA, is a forecasting method for univariate time series data.
As its name suggests, it supports both autoregressive and moving average elements. The integrated element refers to differencing allowing the method to support time-series data with a trend.
A problem with ARIMA is that it does not support seasonal data, that is, a time series with a repeating cycle.
ARIMA expects data that is either not seasonal or has the seasonal component removed, e.g. seasonally adjusted via methods such as seasonal differencing.
The parameters of the ARIMA model are defined as follows:
•p: The number of lag observations included in the model, also called the lag order.
•d: The number of times that the raw observations are differenced, also called the degree of differencing.
•q: The size of the moving average window, also called the order of moving average.
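A minimal sketch of fitting an ARIMA(p, d, q) model with statsmodels on a toy trending series (the series and the order (2, 1, 1) are assumptions); for seasonal data one would move to a seasonal extension such as SARIMAX instead:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

series = np.cumsum(np.random.randn(200))        # a random-walk-like series with a trend
model = ARIMA(series, order=(2, 1, 1)).fit()    # p=2 lag terms, d=1 difference, q=1 moving-average term
print(model.forecast(steps=5))                  # forecast the next 5 points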
Ans.
The Akaike information criterion (AIC) (Akaike, 1974) is a penalized-likelihood technique based on in-sample fit that estimates how well a model can be expected to predict or estimate future values.
A good model is the one that has minimum AIC among all the other models. The AIC can be used to select between the additive and multiplicative Holt-Winters models.
The Bayesian information criterion (BIC) (Stone, 1979) is another criterion for model selection that measures the trade-off between model fit and model complexity. A lower AIC or BIC value indicates a better fit.
AIC and BIC are both penalized-likelihood criteria. Both are of the form “measure of fit + complexity penalty”:
AIC = -2*ln(likelihood) + 2*p, and BIC = -2*ln(likelihood) + ln(N)*p,
where p = number of estimated parameters, N = sample size
•AIC is best for prediction as it is asymptotically equivalent to cross-validation.
•BIC is best for explanation as it allows consistent estimation of the underlying data-generating process.
AIC is asymptotically equivalent to leave-one-out cross-validation, while BIC is asymptotically equivalent to leave-v-out cross-validation with a much larger v.
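The two formulas above translate directly into code; the log-likelihood, p and N below are hypothetical numbers used only to show the computation:

import math

def aic(log_likelihood, p):
    return -2 * log_likelihood + 2 * p            # AIC = -2*ln(likelihood) + 2*p

def bic(log_likelihood, p, n):
    return -2 * log_likelihood + math.log(n) * p  # BIC = -2*ln(likelihood) + ln(N)*p

print(aic(-120.5, 4), bic(-120.5, 4, 100))        # lower values indicate a better trade-off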
Ans. In Machine Learning, performance measurement is an essential task. So when it comes to a classification problem, we can count on an AUC – ROC Curve.
When we need to check or visualize the performance of a multi-class classification problem, we use the AUC (Area Under the Curve) ROC (Receiver Operating Characteristics) curve. It is one of the most important evaluation metrics for checking any classification model's performance. It is also written as AUROC (Area Under the Receiver Operating Characteristics).
The ROC curve is plotted with TPR against the FPR where TPR is on y-axis and FPR is on the x-axis.
Well, a confusion matrix is a performance measurement for a machine learning classification problem where the output can be two or more classes. It is a table with 4 different combinations of predicted and actual values.
It is extremely useful for measuring Recall, Precision, Specificity, Accuracy and, most importantly, the AUC-ROC curve.
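A sketch on synthetic data (data set and classifier are assumptions) of producing the confusion matrix and the ROC curve points (TPR vs FPR) described above:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_curve, roc_auc_score

X, y = make_classification(n_samples=1000, random_state=1)
clf = RandomForestClassifier(random_state=1).fit(X, y)
scores = clf.predict_proba(X)[:, 1]

print(confusion_matrix(y, clf.predict(X)))   # the 4 combinations of predicted vs actual
fpr, tpr, _ = roc_curve(y, scores)           # x- and y-axis values of the ROC curve
print("AUC:", roc_auc_score(y, scores))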
Ans. Naive Bayes performs well when we have multiple classes and when working with text classification. Advantages of the Naive Bayes algorithm are:
It is simple, and if the conditional independence assumption actually holds, a Naive Bayes classifier will converge more quickly than discriminative models like logistic regression, so you need less training data. Even if the NB assumption doesn't hold, it often still performs surprisingly well in practice.
It requires less model training time.
The main difference between Naive Bayes (NB) and Random Forest (RF) is their model size. The Naive Bayes model size is small and fairly constant with respect to the data. NB models cannot represent complex behavior, so they won't overfit. On the other hand, the Random Forest model size is very large and, if not carefully built, it can overfit. So, when your data is dynamic and keeps changing, NB can adapt quickly to the changes and new data, while with an RF you would have to rebuild the forest every time something changes.
from sklearn.naive_bayes import GaussianNB
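Continuing from the import above, a minimal sketch of fitting Gaussian Naive Bayes (the tiny toy data set is an assumption made just for illustration):

import numpy as np

X = np.array([[1.0, 2.1], [1.2, 1.9], [7.8, 8.1], [8.0, 7.9]])   # toy features
y = np.array([0, 0, 1, 1])                                       # toy labels

nb = GaussianNB().fit(X, y)        # uses the GaussianNB imported above
print(nb.predict([[1.1, 2.0]]))    # predicts class 0 for a point near the first group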
K-nearest neighbors algorithm (k-NN) is a supervised method used for classification and regression problems. However, it is widely used in classification problems. It makes predictions by learning from the past available data.
Supervised Technique
Used for Classification or Regression
Used for classification and regression of known data where usually the target attribute/variable is known beforehand.
KNN needs labeled points
K- Means clustering is used for analyzing and grouping data which does not include pre-labeled class or even a class attribute at all.
Unsupervised Technique
Used for Clustering
Used for scenarios like understanding the population demographics, social media trends, anomaly detection, etc.
K-Means doesn’t require labeled points
In unsupervised learning, the data is not labeled so consider the unlabelled data. Our task is to group the data into two clusters.
This is our data; the first thing we can do is to randomly initialize two points, called the cluster centroids.
In k-means we do two things. First is a cluster assignment step and second is a move centroid step.
In the first step, the algorithm goes to each of the data points and divides the points into respective classes, depending on whether it is closer to the red cluster centroid or green cluster centroid.
The second step is the move-centroid step. We compute the mean of all the red points and move the red cluster centroid there, and we do the same for the green cluster.
This is an iterative process, so we repeat the above steps until the cluster centroids stop moving and the colors of the points no longer change.
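A sketch of this two-cluster example with scikit-learn's KMeans, on two random blobs of unlabeled points (the data is assumed):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 5])   # two unlabeled blobs of points

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # assignment + move-centroid steps repeat until convergence
print(km.cluster_centers_)    # final positions of the two centroids
print(km.labels_[:10])        # cluster assignment for the first few points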
KNN is a supervised learning algorithm which means training data is labeled. Consider the task of classifying a green circle between class 1 and class 2.
If we choose k=1, then the green circle will go into class 1, as it is closest to a class 1 point. If k=3, then there are two class 2 objects and one class 1 object among the neighbors, so kNN will classify the green circle into class 2, as that forms the majority.
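A sketch of the k=1 versus k=3 behaviour described above, with toy 2-D points chosen so that the two choices of k disagree (the points are assumptions made for illustration):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0, 0], [-5, 0], [2, 0], [2.2, 0], [5, 0]])   # labeled training points
y = np.array([1, 1, 2, 2, 2])                               # their classes
new_point = [[0.9, 0]]                                      # the 'green circle' to classify

print(KNeighborsClassifier(n_neighbors=1).fit(X, y).predict(new_point))  # nearest single neighbour -> class 1
print(KNeighborsClassifier(n_neighbors=3).fit(X, y).predict(new_point))  # majority of 3 nearest -> class 2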
Ans. To avoid overfitting, we can use the following methods:
Cross-Validation: A standard way to find out-of-sample prediction error is to use 5-fold cross-validation.
Early Stopping: Its rules provide guidance as to how many iterations can be run before the learner begins to overfit.
Pruning: Pruning is extensively used while building tree-based models. It simply removes the nodes which add little predictive power for the problem at hand.
Regularization: It introduces a cost term for bringing in more features with the objective function. Hence it tries to push the coefficients for many variables to zero and hence reduce cost term
Ans. GBM and RF are both ensemble learning methods used for prediction (regression or classification) by combining the outputs of many individual trees.
RFs train each tree independently, using a random sample of the data. This randomness helps to make the model more robust than a single decision tree and less likely to overfit on the training data.
RF is much easier to tune than GBM. There are typically two parameters in RF: number of trees and number of features to be selected at each node.
RF is harder to overfit than GBM.
The main limitation of the Random Forests algorithm is that a large number of trees may make the algorithm slow for real-time prediction.
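A side-by-side sketch on synthetic data (data set and hyperparameters are assumptions) of fitting both ensembles with scikit-learn:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

rf = RandomForestClassifier(n_estimators=300, max_features="sqrt", random_state=0)      # bagging: trees grown independently
gbm = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, random_state=0)  # boosting: trees added sequentially

print("RF  CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())
print("GBM CV accuracy:", cross_val_score(gbm, X, y, cv=5).mean())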