Nearly all big tech companies have an artificial intelligence project, and they are willing to pay experts millions of dollars to help get it done. – By CADE METZ
Machine learning is a part of artificial intelligence. According to an IBM forecast reported by Forbes, job openings for artificial intelligence, machine learning and data science will grow 28% by 2020.
So if you are looking for a machine learning job, or need to prepare for a machine learning interview, take a look at the following questions and answers.
What is machine learning?
Machine learning is a branch of Artificial Intelligence. It allows systems to automatically learn and improve from experience without being explicitly programmed.
What is artificial intelligence?
Artificial Intelligence is a branch of Computer Science that aims to develop machines with human-like intelligence. Most importantly, such machines can learn from experience and deal with new situations smartly.
What is the difference between artificial intelligence and machine learning?
Artificial Intelligence (AI) has many branches, and machine learning (ML) is one of them. AI deals with the broader goal of building machines that can act smartly, like humans. Machine learning, on the other hand, is the narrower approach of providing data to machines so that they learn from that data by themselves.
What are the types of machine learning?
There are 3 types of machine learning: 1. Supervised learning, 2. Unsupervised learning and 3. Reinforcement learning.
What is Supervised machine learning?
In supervised machine learning, you provide a set of data containing both problems and their answers (labels). The machine learns from that data and applies what it has learned to new, unseen inputs.
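The "problems and answers" idea can be sketched with a tiny 1-nearest-neighbour classifier; the heights, labels and function name below are illustrative, not from any particular library:

```python
# A minimal sketch of supervised learning: a 1-nearest-neighbour
# classifier trained on a toy labelled dataset.

def predict_1nn(train, label_of, x):
    """Return the label of the training point closest to x."""
    nearest = min(train, key=lambda p: abs(p - x))
    return label_of[nearest]

# "Problems and answers": heights (cm) labelled as child or adult.
train = [95, 110, 120, 165, 175, 180]
label_of = {95: "child", 110: "child", 120: "child",
            165: "adult", 175: "adult", 180: "adult"}

print(predict_1nn(train, label_of, 130))  # closest to 120 -> child
print(predict_1nn(train, label_of, 170))  # closest to 165 -> adult
```

The training answers (the labels) are what make this supervised: without them, the machine would have nothing to learn from.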
What is Unsupervised machine learning?
In unsupervised learning, we don’t provide any answers to the machine. We provide only a set of unlabeled data, and the machine finds structure in it by itself.
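A minimal sketch of this idea is k-means with k=2 on unlabelled 1-D data: no answers are given, yet the algorithm discovers the two groups itself (pure-Python illustration, not a library API):

```python
# k-means with k=2 on one-dimensional data: alternately assign each
# point to its nearest centre, then move each centre to its group mean.

def kmeans_1d(data, c1, c2, iters=10):
    for _ in range(iters):
        g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

data = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]   # two obvious clusters
print(kmeans_1d(data, c1=0.0, c2=5.0))     # -> [1.0, 10.0]
```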
What is Reinforcement machine learning?
Reinforcement learning is training by rewards and punishments. Here we train a computer much as we train a dog: if the dog obeys and acts according to our instructions, we encourage it by giving it a biscuit; otherwise we punish it (by withholding the biscuit, or by some other means). Similarly, if the system acts well, the teacher gives it a positive value (i.e. a reward); if not, the teacher gives it a negative value (i.e. a punishment). A learning system that receives punishment has to improve itself. Thus it is a trial-and-error process.
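The reward/punishment loop can be sketched as an agent keeping a running value estimate for each action; the actions and rewards below are made-up toy values, not a full reinforcement learning algorithm:

```python
# An agent tries two actions, receives +1 (reward) or -1 (punishment),
# and updates a value estimate for each. Over repeated trials it comes
# to prefer the rewarded action.

rewards = {"sit": +1, "bark": -1}          # the "teacher's" feedback
values = {"sit": 0.0, "bark": 0.0}
alpha = 0.5                                 # learning rate

for _ in range(10):                         # trial and error
    for action in ("sit", "bark"):
        r = rewards[action]
        values[action] += alpha * (r - values[action])

best = max(values, key=values.get)
print(best)                                 # -> sit (the rewarded action)
```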
What are the algorithms used in machine learning?
1. Linear Regression,
2. Logistic Regression,
3. Decision Tree,
4. Naive Bayes,
5. Random Forest,
6. Dimensionality Reduction Algorithms,
7. Gradient Boosting algorithms
Explain Linear Regression
Linear regression is a statistical method that models the relationship between scalar variables. There can be two or more variables: one is the dependent variable, and the others are independent variables.
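With one independent variable, the fit has a closed form; a minimal pure-Python sketch (the data points are illustrative, chosen to lie exactly on a line):

```python
# Fit y = a*x + b by least squares using the closed-form slope and
# intercept for simple linear regression.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # exactly y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)                # -> 2.0 1.0
```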
What do you know about logistic regression?
Like all regression analyses, logistic regression is a predictive analysis. It is used to describe data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables.
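A minimal sketch of logistic regression on one feature, fitted by gradient descent on the log-loss; the data, learning rate and iteration count are illustrative choices:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 1, 1, 1]                 # binary dependent variable

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    for x, y in zip(xs, ys):
        p = sigmoid(w * x + b)          # predicted probability of class 1
        w -= lr * (p - y) * x           # gradient step on the log-loss
        b -= lr * (p - y)

print(sigmoid(w * -2.0 + b) < 0.5)      # negative side -> class 0: True
print(sigmoid(w * 2.0 + b) > 0.5)       # positive side -> class 1: True
```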
What is the difference between linear regression and correlation?
Correlation only gives an index describing the strength of the linear relationship between two variables. Regression, by contrast, can model the relationship among more than two variables and can be used to identify which predictor variables x can predict the outcome variable y. (Historically, the term “regression” referred to values going back towards the average.)
When to use decision tree vs logistic regression?
A logistic regression model is searching for a single linear decision boundary in your feature space, whereas a decision tree is essentially partitioning your feature space into half-spaces using axis-aligned linear decision boundaries. The net effect is that you have a non-linear decision boundary, possibly more than one.
This is nice when your data points aren’t easily separated by a single hyperplane. On the other hand, decision trees are so flexible that the better choice depends on your specific problem and data. Both decision trees (depending on the implementation, e.g. C4.5) and logistic regression can handle continuous and categorical data just fine. Decision trees can be prone to overfitting; to combat this, you can try pruning. Logistic regression tends to be less susceptible (but not immune!) to overfitting.
Lastly, another thing to consider is that decision trees can automatically take interactions between variables into account: for example the product x·y, if you have two independent features x and y. With logistic regression, you have to add such interaction terms manually.
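The interaction-term point can be sketched with XOR-style data: no single line in (x, y) separates the classes, but manually adding the product feature x·y makes a trivial linear rule work (the points and labels below are illustrative):

```python
# XOR-style data: the class depends on x and y jointly.
points = [(1, 1), (-1, -1), (1, -1), (-1, 1)]
labels = [1, 1, 0, 0]

# With the hand-added interaction term x*y, the sign of that single
# feature separates the classes perfectly -- a linear rule in the
# augmented feature space.
preds = [1 if x * y > 0 else 0 for x, y in points]
print(preds == labels)                    # -> True
```

A decision tree handles the same data without hand-made features, by splitting first on x and then on y (two axis-aligned boundaries).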
Which algorithms do we use for supervised machine learning?
Classification Algorithms: 1. Support vector machines (SVM), 2. Neural networks, 3. Naïve Bayes classifier, 4. Decision trees, 5. Discriminant analysis, 6. Nearest neighbors (kNN); Regression Algorithms: 1. Linear regression, 2. Nonlinear regression, 3. Generalized linear models, 4. Decision trees, 5. Neural networks
Which algorithms do we use for unsupervised machine learning?
a. Clustering: k-means, mixture models, hierarchical clustering; b. Neural networks: Hebbian learning, generative adversarial networks; c. Approaches for learning latent variable models: the expectation–maximization (EM) algorithm, the method of moments.
How is KNN different from k-means clustering?
K-nearest neighbors is a classification algorithm, which is a form of supervised learning. K-means is a clustering algorithm, which is a form of unsupervised learning. In sum, they are two different algorithms with two very different end results.
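The contrast can be sketched on the same toy 1-D data: k-NN needs labels and classifies a new point, while k-means gets no labels and only groups the points (pure Python, with k=1 neighbour and 2 clusters for brevity):

```python
data = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]

# k-NN (supervised): labels are required to classify a new point.
labels = {1.0: "A", 1.2: "A", 0.8: "A", 10.0: "B", 10.4: "B", 9.6: "B"}
nearest = min(data, key=lambda p: abs(p - 2.0))
print(labels[nearest])                    # classifies 2.0 as "A"

# k-means (unsupervised): no labels, just cluster centres.
c1, c2 = 0.0, 5.0
for _ in range(5):
    g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
print(sorted([round(c1, 1), round(c2, 1)]))   # -> [1.0, 10.0]
```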
What is a ROC curve and how does it work?
In statistics, a receiver operating characteristic curve, i.e. ROC curve, is a graphical plot. It illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. ROC analysis is related in a direct and natural way to cost/benefit analysis of diagnostic decision making.
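Sweeping the discrimination threshold is the whole mechanism, so it can be sketched directly; the scores and labels below are made up for illustration:

```python
# Build ROC points by sweeping the decision threshold over classifier
# scores and recording (FPR, TPR) at each step.

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]      # 1 = positive class

P = sum(labels)                             # number of positives
N = len(labels) - P                         # number of negatives

def roc_point(threshold):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return fp / N, tp / P                   # (FPR, TPR)

curve = [roc_point(t) for t in (1.0, 0.75, 0.5, 0.2, 0.0)]
print(curve)   # runs from (0, 0) up to (1, 1) as the threshold is lowered
```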
What is semi supervised machine learning?
Semi-supervised machine learning is a mixture of supervised and unsupervised learning. Here, some of the data is labeled but most of it is unlabeled.
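One common semi-supervised idea, self-training, can be sketched as follows: the few labelled points assign pseudo-labels to the larger unlabelled pool via nearest neighbours (pure-Python illustration with made-up data):

```python
labelled = {1.0: "A", 10.0: "B"}          # the small labelled portion
unlabelled = [0.8, 1.3, 9.5, 10.2]        # the larger unlabelled portion

def nearest_label(x):
    nearest = min(labelled, key=lambda p: abs(p - x))
    return labelled[nearest]

# Self-training step: assign pseudo-labels to the unlabelled data.
pseudo = {x: nearest_label(x) for x in unlabelled}
print(pseudo)   # 0.8 and 1.3 -> "A"; 9.5 and 10.2 -> "B"
```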
What is Ordinary Least Squares Regression?
In statistics, ordinary least squares (OLS) or linear least squares is a method for estimating the unknown parameters in a linear regression model. Its goal is to minimize the sum of the squares of the differences between the observed responses (the values of the variable being predicted) in the given dataset and those predicted by a linear function of a set of explanatory variables.
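For one predictor plus an intercept, the OLS estimate solves the 2×2 normal equations XᵀXβ = Xᵀy, which fits in a few lines of pure Python (the data are chosen so the fit is exact):

```python
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]                 # exactly y = 2x + 1
n = len(xs)

# Entries of X^T X and X^T y for the design matrix X = [[1, x_i]].
sx, sxx = sum(xs), sum(x * x for x in xs)
sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))

det = n * sxx - sx * sx                   # determinant of X^T X
b = (sxx * sy - sx * sxy) / det           # intercept
a = (n * sxy - sx * sy) / det             # slope
print(a, b)                               # -> 2.0 1.0
```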
Briefly describe Naïve Bayes Classification
Naive Bayes is a collection of classification algorithms based on Bayes’ Theorem. It is not a single algorithm but a family of algorithms that all share a common principle: every feature being classified is independent of the value of any other feature. So, for example, a fruit may be considered to be an apple if it is red, round, and about 3″ in diameter. A Naive Bayes classifier considers each of these “features” (red, round, 3″ in diameter) to contribute independently to the probability that the fruit is an apple, regardless of any correlations between features. Features, however, aren’t always independent, which is often seen as a shortcoming of the Naive Bayes algorithm, and this is why it’s labeled “naive”.
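The rule on the fruit example can be sketched directly: multiply the class prior by the per-feature likelihoods, assuming the features are independent given the class (all probabilities below are made-up illustrative numbers, not from any dataset):

```python
priors = {"apple": 0.5, "orange": 0.5}
likelihood = {                          # assumed P(feature | class)
    "apple":  {"red": 0.8, "round": 0.9},
    "orange": {"red": 0.1, "round": 0.9},
}

features = ["red", "round"]
score = {}
for c in priors:
    p = priors[c]
    for f in features:                  # the "naive" independence step
        p *= likelihood[c][f]
    score[c] = p

total = sum(score.values())
posterior = {c: s / total for c, s in score.items()}
print(max(posterior, key=posterior.get))   # -> apple
```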
Do you know the meaning of SVM?
“Support Vector Machine” (SVM) is a supervised machine learning algorithm which can be used for both classification and regression challenges. However, it is mostly used for classification problems. In this algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features you have), with the value of each feature being the value of a particular coordinate. Then we perform classification by finding the hyperplane that differentiates the two classes well.
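The decision rule alone can be sketched as follows; note the weights w and bias b are hand-picked here, not learned, since training an SVM involves a quadratic optimisation omitted from this illustration:

```python
# Given a separating hyperplane w.x + b = 0, an SVM classifies a point
# by the sign of w.x + b.

w = [1.0, 1.0]          # assumed hyperplane weights (illustrative)
b = -3.0                # assumed bias (illustrative)

def classify(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return +1 if score >= 0 else -1

print(classify([2.0, 2.0]))   # 2 + 2 - 3 = +1.0  -> class +1
print(classify([0.5, 1.0]))   # 0.5 + 1 - 3 = -1.5 -> class -1
```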