Intuition behind logloss function

















I have difficulty understanding the intuition behind the log-loss function, since it seems to totally ignore negative examples where y = 0.



The images below visualize my question to some extent:



[three images omitted]



Your advice will be appreciated!
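
For concreteness, here is a small Python sketch of what puzzles me (it assumes the loss is just the single term $-y \log(\hat y)$, with names of my own choosing): every negative example contributes zero loss, no matter what the model predicts.

    import math

    def single_term_loss(y, y_hat):
        # Only the term -y * log(y_hat); when y = 0 it vanishes identically.
        return -y * math.log(y_hat)

    # A negative example (y = 0) is "ignored": its loss is zero however
    # confident and wrong the predicted probability y_hat is.
    for y_hat in (0.01, 0.5, 0.99):
        print(y_hat, single_term_loss(0, y_hat))  # zero every time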

















Tagged: log-loss






asked Nov 27 at 8:40 by user8270077
I don't know what you are trying to do; however, read en.wikipedia.org/wiki/… for the cross-entropy for logistic regression. Your loss is not specified correctly if this is what you intended to do.
– Cowboy Trader
Nov 27 at 8:48














1 Answer






The formula you used seems to be



$$
H(X) = -P(X)\log P(X),
$$



the definition of entropy. You seem to be asking about cross-entropy loss, also known as log-loss, which is defined as



$$
L(y, \hat y) = \underbrace{-y \log(\hat y)}_{\text{when } y=1} \; \underbrace{-(1-y) \log(1-\hat y)}_{\text{when } y=0}
$$



where $y \in \{0, 1\}$ is the label and $\hat y$ is the predicted probability for the label. So the loss is zero for perfect classifications $y = \hat y = 1$ or $y = \hat y = 0$, and logarithmically increases otherwise.
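
To see both branches at work, here is a minimal plain-Python sketch of this loss (the function name and the example probabilities are just for illustration):

    import math

    def log_loss(y, y_hat):
        # Per-example cross-entropy: -y*log(y_hat) - (1-y)*log(1-y_hat).
        # (In practice y_hat is clipped away from exactly 0 and 1 to avoid log(0).)
        return -y * math.log(y_hat) - (1 - y) * math.log(1 - y_hat)

    print(log_loss(1, 0.99))  # ~0.01: confident and correct, tiny loss (first term)
    print(log_loss(0, 0.01))  # ~0.01: confident and correct, tiny loss (second term)
    print(log_loss(0, 0.99))  # ~4.61: confident and wrong, large loss, coming
                              #        entirely from the (1 - y) log(1 - y_hat) term

So negative examples are not ignored: when $y = 0$ the first term vanishes, but the second term $-(1-y)\log(1-\hat y)$ grows quickly as $\hat y$ moves toward 1.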






answered Nov 27 at 9:18 by Tim (edited Nov 27 at 9:27)





























