Finding a logistic regression model which can achieve zero error on the training data for a binary...
Not sure where to begin with this question, can anyone help out?
machine-learning self-study mathematical-statistics
Is this from a book? If so, can you share the title? – grayQuant, Feb 27 at 20:09
edited Feb 27 at 18:08 by Bryan Krause
asked Feb 27 at 17:39 by user239276
1 Answer
Logistic regression is a linear classifier: on a 2D dataset it draws a line (in higher dimensions, a hyperplane) and classifies points by which side they fall on (one side is class 0, the other is class 1). If the classes can be distinguished this way, the dataset is said to be linearly separable; this dataset is not. One way to tackle the issue is to create new features by applying transformations to the inputs. For example, this dataset appears separable if you think radially, i.e. by the rule $R>\alpha$, where $R=\sqrt{X_1^2+X_2^2}$ is the radius, or distance to the origin. Fitting a logistic regression on this single feature results in perfect classification.
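A minimal sketch of this idea in plain Python. The concentric toy data, learning rate, and iteration count are my own assumptions for illustration — the question gives no numbers — but the feature engineering step is exactly the $R=\sqrt{X_1^2+X_2^2}$ transform described above:

```python
import math

# Toy "circles" data: class 0 points lie near the origin,
# class 1 points lie on an outer ring (angles are arbitrary).
points, labels = [], []
for k in range(12):
    t = 2 * math.pi * k / 12
    points.append((0.5 * math.cos(t), 0.5 * math.sin(t))); labels.append(0)
    points.append((2.0 * math.cos(t), 2.0 * math.sin(t))); labels.append(1)

# The single engineered feature: R = sqrt(x1^2 + x2^2)
R = [math.hypot(x1, x2) for x1, x2 in points]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain gradient descent on the logistic loss with the one feature R.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(5000):
    gw = gb = 0.0
    for r, y in zip(R, labels):
        p = sigmoid(w * r + b)
        gw += (p - y) * r
        gb += (p - y)
    w -= lr * gw / len(R)
    b -= lr * gb / len(R)

preds = [1 if sigmoid(w * r + b) >= 0.5 else 0 for r in R]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(accuracy)  # the radial feature separates the classes perfectly
```

The decision boundary in feature space is $R = -b/w$, i.e. a circle in the original $(X_1, X_2)$ plane — which is why a purely linear model can fit radially separated classes once the feature is added.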
By log-reg, do you mean a logistic regression model? Thanks for your help by the way! – user239276, Feb 27 at 17:53

Yes, sorry for the ambiguity. – gunes, Feb 27 at 17:54

@gunes This might be a bit too much of an answer for a self-study question, although I don't typically police those here and am not certain where exactly the community falls on these sorts of questions beyond what is included in the tag info. – Bryan Krause, Feb 27 at 17:55

(+1) It's worth noting that this is essentially a very simple radial basis network with logistic loss. – Cliff AB, Feb 27 at 18:22

It may be worth noting that this will cause the logistic regression not to converge: the parameter estimate for $R$ will tend to infinity! – Matthew Drury, Feb 27 at 21:18
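Matthew Drury's point can be checked numerically: on perfectly separated data the logistic loss has no finite minimizer, since scaling $(w, b)$ up always lowers it, so the coefficient on $R$ grows without bound as optimization continues. A self-contained sketch (the toy radii and step counts are illustrative assumptions):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Radii for a perfectly separated toy set: class 0 inside, class 1 outside.
R = [0.5, 0.8, 1.0, 2.0, 2.5, 3.0]
y = [0, 0, 0, 1, 1, 1]

def fit(steps, lr=0.5):
    """Gradient descent on the logistic loss with the single feature R;
    returns the fitted coefficient w."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for r, t in zip(R, y):
            p = sigmoid(w * r + b)
            gw += (p - t) * r
            gb += (p - t)
        w -= lr * gw / len(R)
        b -= lr * gb / len(R)
    return w

# Because the classes are separable in R, w keeps growing with more
# iterations instead of settling at a finite value.
print(fit(1_000), fit(10_000), fit(100_000))
```

The predicted probabilities are driven toward exactly 0 and 1, which requires $|w|\to\infty$; in practice this is usually tamed with regularization or an early-stopping rule.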
answered Feb 27 at 17:48 by gunes (edited Feb 27 at 17:54)
protected by gung♦ Feb 28 at 4:02