Intuition behind logloss function
I have difficulty understanding the intuition behind the log-loss function, since it seems to totally ignore negative examples where y = 0.
The images below visualize my question to some extent:
Your advice will be appreciated!
log-loss
asked Nov 27 at 8:40
user8270077
I don't know what you are trying to do; however, read en.wikipedia.org/wiki/… for the cross-entropy for logistic regression. Your loss is not specified correctly, if this is what you intended to do.
– Cowboy Trader
Nov 27 at 8:48
1 Answer
The formula you used seems to be
$$
H(X) = -P(X)\log P(X),
$$
the definition of entropy. You seem to be asking about cross-entropy loss, also known as log-loss, which is defined as
$$
L(y, \hat y) = \underbrace{-y \log(\hat y)}_{\text{when } y=1} \; \underbrace{- (1-y) \log(1-\hat y)}_{\text{when } y=0}
$$
where $y \in \{0, 1\}$ is the label and $\hat y$ is the predicted probability of the positive class. So the loss is zero for perfect classifications, $y = \hat y = 1$ or $y = \hat y = 0$, and grows logarithmically toward infinity otherwise.
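Here is a minimal numerical sketch in plain Python (not part of the original answer) showing that both terms of the loss are active: the first handles positive examples and the second handles negative ones, so examples with $y = 0$ are not ignored.

    import math

    def log_loss(y, y_hat):
        """Cross-entropy (log-loss) for one example: y is the true label (0 or 1),
        y_hat is the predicted probability of the positive class."""
        return -y * math.log(y_hat) - (1 - y) * math.log(1 - y_hat)

    # Positive example (y = 1): only the first term is non-zero.
    print(log_loss(1, 0.9))  # ~0.105  correct, confident -> small loss
    print(log_loss(1, 0.1))  # ~2.303  wrong, confident   -> large loss

    # Negative example (y = 0): the second term, -log(1 - y_hat), takes over,
    # so the loss does not ignore y = 0 cases.
    print(log_loss(0, 0.1))  # ~0.105  correct, confident -> small loss
    print(log_loss(0, 0.9))  # ~2.303  wrong, confident   -> large loss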
edited Nov 27 at 9:27
answered Nov 27 at 9:18
Tim♦