When you start learning linear regression and logistic regression, you will come across an equation that contains theta. You might wonder what this theta in machine learning means.
A few days ago I was in my EMBA class "Managing Operations"; our topic was forecasting. In that class I learned something about weights, and later I was able to relate weights to theta. Today I will explain theta in terms of weights.
Suppose you have an online store where you sell expensive, unique, and stylish pens. Last year you advertised on Google AdWords, Facebook, Twitter, and a local newspaper. From last year's data you learned that the Google advertisement was more effective than all the others. This year you have to plan your advertising across these media. To get a better result, what will you do?
Simple answer: you will spend more money on Google advertising. From last year's data you found that Facebook is in second place, so your second priority will be Facebook. You also advertised in the local newspaper, but you couldn't figure out how much traffic that advertisement brought in. So this year you are not going to advertise in the local newspaper at all.
Let's assume this year's total advertising budget is US$6,000. We can write this year's advertising plan as follows:
$3,000 Google advertisement + $2,000 Facebook advertisement + $1,000 Twitter advertisement + $0 local newspaper advertisement = $6,000.
From the calculation above we can say that we are spending most of the money on Google advertising. Alternatively, we can say that the most weight goes to Google. So by using weights you give different priorities to different media.
Now you understand what weight means, and with it the meaning of theta in machine learning. In machine learning algorithms we use theta to give different features different priority, or weight, based on their importance.
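To make the idea concrete, here is a minimal Python sketch of the budget plan above written as a weighted sum. The channel names and dollar amounts come straight from the example:

```python
# Each channel's dollar amount acts as its weight (theta):
# the bigger the weight, the higher the priority of that channel.
theta = {
    "google": 3000,
    "facebook": 2000,
    "twitter": 1000,
    "newspaper": 0,   # weight 0: we skip this channel entirely
}

total_budget = sum(theta.values())
print(total_budget)  # 6000
```

The largest weight (Google) gets the largest share of the outcome, which is exactly what theta does for features in a model.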
House price example
When we do linear regression and logistic regression we use features. For example, suppose we want to predict house prices in a specific area. We take historical data, such as previous selling prices, and from that data we extract different features of each house: the number of rooms, the number of bathrooms, the number of kitchens, and whether the house is beside the main road or far from it.
When calculating a house price we give more priority to some features and less to others. For example, the number of rooms carries more weight than the number of kitchens. Five kitchens in a 2,000-square-foot home may not add much value, because one or two kitchens are enough. That's why the number of kitchens is not a major factor in predicting house price. So we give less weight to kitchens and more weight to rooms and to the total area of the house.
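As a sketch, a linear hypothesis for the house-price example might look like the following. The weight values are invented for illustration; a real model would learn them from the historical data:

```python
# Hypothetical weights (theta): rooms and total area get large weights,
# kitchens get a small one, matching the reasoning above.
theta = {
    "rooms": 40000,
    "area_sqft": 100,
    "kitchens": 2000,
    "beside_main_road": 15000,
}

def predict_price(house):
    # Linear hypothesis: price = sum over features of theta_i * x_i
    return sum(theta[name] * value for name, value in house.items())

house = {"rooms": 4, "area_sqft": 2000, "kitchens": 1, "beside_main_road": 1}
print(predict_price(house))  # 377000
```

Changing a high-weight feature (rooms) moves the prediction far more than changing a low-weight one (kitchens).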
So now when you see a machine learning algorithm with theta, you will be able to figure out what it means.
Reinforcement learning is one type of machine learning. In a single sentence: in this learning process a machine learns by trial and error. Essentially, we give the machine two instructions:
1. Try all possible ways.
2. From your experience, avoid errors and increase your success rate.
Suppose we have a robot and there is a fire in front of it. The robot can do one of two things: jump directly into the fire or run away from it.
At first it will try both. It jumps into the fire and fails; it runs away and survives. The robot remembers this. The next time it sees fire, it will run away. This is the basic concept of reinforcement learning.
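The robot's trial-and-error loop can be sketched in a few lines of Python. This is a toy value-learning loop with made-up rewards, not a full reinforcement learning implementation:

```python
import random

# Toy trial-and-error learner for the fire example.
# Each action keeps a running estimate of how good it is.
values = {"jump_into_fire": 0.0, "run_away": 0.0}

random.seed(0)
for _ in range(100):
    action = random.choice(list(values))   # instruction 1: try all possible ways
    reward = -1.0 if action == "jump_into_fire" else 1.0
    # instruction 2: learn from experience (nudge estimate toward the reward)
    values[action] += 0.1 * (reward - values[action])

best = max(values, key=values.get)
print(best)  # run_away
```

After enough trials the estimate for running away dominates, so the robot stops jumping into the fire.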
Reinforcement learning algorithms:
Relative value learning (R-Learning)
Where to apply:
There are many fields where reinforcement learning can be applied. Some examples:
Playing games: Reinforcement learning can learn to play different games and master them. A great example is the AlphaGo system, which used this kind of machine learning to beat a top-ranked Go player.
Natural language processing: Processing human language is a very difficult task. Reinforcement learning is helping us make progress on it.
Self-driving cars: In the near future we'll see a lot of self-driving cars on the road. Reinforcement learning is contributing a lot to making that happen; algorithms such as Deep Q-Learning are used in self-driving car systems to improve driving.
Robot movement: Robots' movements improve over time using reinforcement learning. For example, a robot can learn to grab an object more accurately with this kind of algorithm.
Nearly all big tech companies have an artificial intelligence project, and they are willing to pay experts millions of dollars to help get it done. – Cade Metz
Machine learning is a part of artificial intelligence. According to IBM's forecast, job openings for artificial intelligence, machine learning, and data science will increase 28% by 2020 (Forbes).
So if you are looking for a machine learning job or preparing for a machine learning interview, take a look at the following questions.
What is machine learning?
Machine learning is a branch of Artificial Intelligence. It allows systems to automatically learn and improve from experience without being explicitly programmed.
What is artificial intelligence?
Artificial intelligence is a branch of computer science that researches and develops machines with human-like intelligence. Most importantly, such machines can learn from experience and deal with new situations smartly.
What is the difference between artificial intelligence and machine learning?
Artificial intelligence (AI) has many branches, and machine learning (ML) is one of them. AI deals with the broader goal of developing a machine that can act like a human and behave smartly. In machine learning, on the other hand, we provide data to machines and they learn for themselves from that data.
What are the types of machine learning?
There are three types of machine learning: 1. supervised learning, 2. unsupervised learning, and 3. reinforcement learning.
What is Supervised machine learning?
In supervised machine learning, you provide a set of data with both problems and answers. The machine learns from that data and applies what it has learned in the future.
What is Unsupervised machine learning?
In unsupervised learning, we don't provide any answers to the machine. We provide only a set of data, and the machine learns from it by itself.
What is Reinforcement machine learning?
Reinforcement learning is training by rewards and punishments. We train a computer much as we train a dog: if the dog obeys and acts according to our instructions, we encourage it with a biscuit; otherwise we punish it (by withholding the biscuit or by some other means). Similarly, if the system performs well, the teacher gives it a positive value (a reward); otherwise the teacher gives it a negative value (a punishment). A system that receives punishment has to improve itself. Thus it is a trial-and-error process.
What is linear regression?
Linear regression is a statistical method that attempts to model the relationship between scalar variables. There can be two or more variables; one is the dependent variable, and the others are independent variables.
What do you know about logistic regression?
Like all regression analyses, logistic regression is a predictive analysis. It is used to describe data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval, or ratio-level independent variables.
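As a small illustration, logistic regression passes a weighted sum of the inputs through the sigmoid function to get a probability for the binary outcome. The weights below are assumed, not fitted:

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

theta = [-1.0, 0.8]  # hypothetical intercept and feature weight

def predict_proba(x):
    # Probability that the binary outcome is 1 for input x.
    z = theta[0] + theta[1] * x
    return sigmoid(z)

print(predict_proba(5))
```

A threshold (commonly 0.5) on this probability turns the model into a classifier.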
What is the difference between linear regression and correlation?
Correlation only gives an index describing the strength of the linear relationship between two variables. Regression, by contrast, can model the relationship among more than two variables and can be used to identify which variables x predict the outcome variable y. (Historically, the word "regression" refers to going back toward the average.)
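The difference can be seen in a few lines of Python on toy (invented) data: correlation yields a single index, while regression yields a slope and intercept you can actually use for prediction:

```python
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))

r = sxy / (sxx * syy) ** 0.5   # correlation: just an index in [-1, 1]
slope = sxy / sxx              # regression: usable to predict y from x
intercept = my - slope * mx
print(r, slope, intercept)
```

With `slope` and `intercept` you can predict y for a new x; `r` alone tells you only how strong the linear association is.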
What is the difference between logistic regression and a decision tree?
A logistic regression model searches for a single linear decision boundary in your feature space, whereas a decision tree essentially partitions your feature space into half-spaces using axis-aligned linear decision boundaries. The net effect is a non-linear decision boundary, possibly more than one.
This is nice when your data points aren't easily separated by a single hyperplane. Which approach is better depends on your specific problem and the data you have. Both decision trees (depending on the implementation, e.g. C4.5) and logistic regression can handle continuous and categorical data just fine. Decision trees can be prone to overfitting; to combat this, you can try pruning. Logistic regression tends to be less susceptible (but not immune!) to overfitting.
Lastly, another thing to consider is that decision trees can automatically take into account interactions between variables: for example, the product x*y of two independent features x and y. With logistic regression, you have to add those interaction terms manually.
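For example, adding the interaction term manually before fitting a logistic regression might look like this (the feature rows are invented):

```python
# Rows of two features (x, y).
rows = [(1.0, 2.0), (0.5, 4.0), (3.0, 1.0)]

# Expand each row (x, y) into (x, y, x*y) so a linear model
# can pick up the interaction between the two features.
expanded = [(x, y, x * y) for x, y in rows]
print(expanded)
```

The expanded rows would then be fed to the logistic regression instead of the originals.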
Which algorithms do we use for supervised machine learning?
Common supervised learning algorithms include linear regression, logistic regression, decision trees, K-nearest neighbors, Naive Bayes, and support vector machines.
What is the difference between K-nearest neighbors and K-means?
K-nearest neighbors is a classification algorithm, which falls under supervised learning. K-means is a clustering algorithm, which falls under unsupervised learning. In short, they are two different algorithms with two very different end results.
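A toy 1-D example (data invented) shows the contrast: KNN needs labels to classify a new point, while K-means finds cluster centers with no labels at all:

```python
# --- KNN: supervised, requires labeled data ---
labeled = [(1.0, "a"), (1.2, "a"), (8.0, "b"), (8.3, "b")]

def knn_predict(x, k=3):
    # Majority label among the k nearest labeled points.
    nearest = sorted(labeled, key=lambda p: abs(p[0] - x))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

print(knn_predict(1.1))  # a

# --- K-means: unsupervised, uses only the raw points ---
points = [1.0, 1.2, 8.0, 8.3]
centroids = [0.0, 10.0]
for _ in range(5):  # a few iterations: assign, then recompute centers
    clusters = [[], []]
    for p in points:
        clusters[min((0, 1), key=lambda i: abs(p - centroids[i]))].append(p)
    centroids = [sum(c) / len(c) if c else centroids[i]
                 for i, c in enumerate(clusters)]
print(centroids)  # cluster centers found without any labels
```

KNN produces a label for a query point; K-means produces groupings of the data itself.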
What is a ROC curve?
In statistics, a receiver operating characteristic (ROC) curve is a graphical plot. It illustrates the diagnostic ability of a binary classifier as its discrimination threshold is varied. ROC analysis relates in a direct and natural way to cost/benefit analysis in diagnostic decision making.
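Each point on a ROC curve is a (false positive rate, true positive rate) pair at one threshold; the scores and labels below are invented:

```python
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]   # classifier scores
labels = [1,   1,   0,   1,   0,   0]     # ground-truth classes

pos = sum(labels)
neg = len(labels) - pos

def roc_point(threshold):
    # Predict positive when score >= threshold, then count hits and false alarms.
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return fp / neg, tp / pos   # (FPR, TPR)

for t in (0.9, 0.5, 0.0):
    print(t, roc_point(t))
```

Sweeping the threshold from high to low traces the curve from (0, 0) toward (1, 1).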
What is ordinary least squares (OLS)?
In statistics, ordinary least squares (OLS), or linear least squares, is a method for estimating the unknown parameters in a linear regression model. Its goal is to minimize the sum of the squared differences between the observed responses (values of the variable being predicted) in the dataset and those predicted by a linear function of a set of explanatory variables.
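For one explanatory variable, the OLS estimates have a simple closed form. On the toy data below, which lies exactly on a line, the sum of squared residuals minimizes all the way to zero:

```python
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]   # exactly y = 2x + 1

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Closed-form OLS estimates for slope and intercept.
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# The quantity OLS minimizes: sum of squared residuals.
residual_sq = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
print(slope, intercept, residual_sq)  # 2.0 1.0 0.0
```

On noisy data the residual sum would be positive, but no other line would make it smaller.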
What is Naive Bayes?
Naive Bayes is a collection of classification algorithms based on Bayes' theorem. It is not a single algorithm but a family of algorithms that all share a common principle: every feature being classified is independent of the value of any other feature. So, for example, a fruit may be considered an apple if it is red, round, and about 3" in diameter. A Naive Bayes classifier considers each of these features (red, round, 3" in diameter) to contribute independently to the probability that the fruit is an apple, regardless of any correlations between features. Features, however, aren't always independent, which is often seen as a shortcoming of the algorithm and is why it's labeled "naive".
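A toy version of the apple example, with made-up probabilities, shows how the independent feature likelihoods are multiplied together:

```python
# Hypothetical priors and per-feature likelihoods (all numbers invented).
priors = {"apple": 0.5, "not_apple": 0.5}
likelihood = {
    "apple":     {"red": 0.8, "round": 0.9},
    "not_apple": {"red": 0.3, "round": 0.4},
}

def posterior(cls, features):
    # Naive assumption: multiply feature likelihoods as if independent.
    p = priors[cls]
    for f in features:
        p *= likelihood[cls][f]
    return p

scores = {c: posterior(c, ["red", "round"]) for c in priors}
best = max(scores, key=scores.get)
print(best)  # apple
```

The class with the highest product of prior and likelihoods wins, even though real features may be correlated.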
What is a support vector machine (SVM)?
A support vector machine (SVM) is a supervised machine learning algorithm that can be used for both classification and regression challenges, though it is mostly used for classification. In this algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features you have), with the value of each feature being the value of a particular coordinate. Then we perform classification by finding the hyperplane that best separates the two classes.
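Once the hyperplane is found, classification is just the sign of w·x + b. The weight vector and bias below are invented for illustration, not the result of actually training an SVM:

```python
# Hypothetical separating hyperplane w·x + b = 0 in 2-D feature space.
w = [1.0, -1.0]
b = 0.0

def classify(x):
    # Which side of the hyperplane does the point fall on?
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

print(classify([3.0, 1.0]))   # 1
print(classify([1.0, 3.0]))   # -1
```

Training an SVM is the process of choosing w and b so that this boundary has the widest possible margin between the classes.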