Gradient Descent: Cost Function
I'm trying to implement the gradient descent method for the problem of minimising the following function:



$$f(x) = \frac{1}{2}(x-m)^{T}A(x-m)-\sum\limits_{i=1}^n\log\left(x_i^{2}\right),$$



where $x \in \mathbb{R}^n$ is a vector; $m \in \mathbb{R}^n$ is a fixed vector; and $A$ is a fixed positive definite matrix.



The only application of gradient descent I have come across is linear regression! So, as a starting point for helping me to solve this, I'd like to know in what situations this cost function would be applied. Does anyone out there recognise it?
  • Your objective function is a special case of a quadratic cost function with a regularization term that uses the $\log$ of the coefficients. $L_1$ or $L_2$ regularization cost functions are more typical, but you can impose a gentler penalty with a $\log$ cost function.
    – Aditya Dua
    Nov 12 at 22:25
    Thanks @AdityaDua. This response has been useful to me.
    – Barton
    Nov 14 at 21:52
  • @AdityaDua do you know what the role of the matrix $A$ would be in this cost function?
    – Barton
    Nov 17 at 17:48
  • I'd suggest reading up on "generalised least squares". The matrix $A$ allows you to deal with correlated errors and unequal variances (heteroscedasticity).
    – Aditya Dua
    Nov 18 at 3:09
gradient-descent
asked Nov 12 at 20:13 by Barton; edited Nov 13 at 13:31 by Davide Giraudo
1 Answer
From the question, $f$ is the sum of a convex quadratic term and a logarithmic penalty:



$$f(x) = \frac{1}{2}(x-m)^{\mathrm T} A (x-m)-\sum\limits_{i=1}^n\log\left(x_i^{2}\right),$$



where $x \in \mathbb{R}^n$ is a vector; $m \in \mathbb{R}^n$ is a fixed vector; and $A$ is a fixed symmetric positive definite matrix.



Taking $\log$ as the natural logarithm, the quadratic term has gradient $A(x-m)$ (since $A$ is symmetric), and $\frac{\partial}{\partial x_i}\log\left(x_i^{2}\right) = \frac{2}{x_i}$, so

$$\nabla f(x) = A(x-m) - \left(\frac{2}{x_1}, \dots, \frac{2}{x_n}\right)^{\mathrm T}.$$
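As a sanity check, the componentwise gradient $[A(x-m)]_i - 2/x_i$ can be compared against central finite differences. This is a minimal sketch (my own code, not the poster's), assuming the natural logarithm and a symmetric positive definite $A$:

```python
import numpy as np

def f(x, m, A):
    """Objective: 0.5*(x-m)^T A (x-m) - sum(log(x_i^2)), natural log."""
    d = x - m
    return 0.5 * d @ A @ d - np.sum(np.log(x**2))

def grad_f(x, m, A):
    """Analytic gradient: A(x-m) minus 2/x_i componentwise (A symmetric)."""
    return A @ (x - m) - 2.0 / x

# Random symmetric positive definite A and a test point away from 0
rng = np.random.default_rng(0)
n = 4
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)       # symmetric positive definite
m = rng.normal(size=n)
x = rng.normal(size=n) + 3.0      # keep entries away from x_i = 0

# Central finite-difference approximation of each partial derivative
eps = 1e-6
num = np.array([
    (f(x + eps * np.eye(n)[i], m, A) - f(x - eps * np.eye(n)[i], m, A)) / (2 * eps)
    for i in range(n)
])
assert np.allclose(num, grad_f(x, m, A), atol=1e-4)
```

The check passes only because $A$ is symmetric; for a non-symmetric $A$ the quadratic term's gradient would instead be $\frac{1}{2}(A + A^{\mathrm T})(x-m)$.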



Using gradient descent with step $\mu$,



$$ x_{k+1} = x_k - \mu \nabla f(x_k).$$



Choose $\mu$ such that $f(x_{k+1}) < f(x_k)$, then loop until the iterates settle, i.e. until $f(x_k) - f(x_{k+1}) \approx 0$.



This is the general recipe for gradient descent on a smooth function in $n$-dimensional space; note that $f$ here is not quadratic, and the $-\log\left(x_i^{2}\right)$ terms make it convex only on each orthant. Hope it is helpful.
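The loop described above can be sketched in Python. This is an illustrative implementation under stated assumptions (natural logarithm; the `gradient_descent` name, the step-halving line search, and the example $A$ and $m$ are my own choices, not from the original post):

```python
import numpy as np

def gradient_descent(f, grad_f, x0, mu=0.1, tol=1e-10, max_iter=10000):
    """Gradient descent with a simple halving line search:
    shrink the step until it actually decreases f, as the answer requires."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad_f(x)
        step = mu
        # backtrack so that f(x_{k+1}) < f(x_k)
        while f(x - step * g) >= f(x) and step > 1e-16:
            step *= 0.5
        x_new = x - step * g
        # stop when f(x_k) - f(x_{k+1}) is negligible
        if abs(f(x) - f(x_new)) < tol:
            return x_new
        x = x_new
    return x

# Example instance of the objective from the question
n = 3
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 2.0]])   # symmetric positive definite
m = np.ones(n)

f = lambda x: 0.5 * (x - m) @ A @ (x - m) - np.sum(np.log(x**2))
grad = lambda x: A @ (x - m) - 2.0 / x

# Starting in the positive orthant; the log barrier keeps iterates there
x_star = gradient_descent(f, grad, x0=np.full(n, 2.0))
assert np.linalg.norm(grad(x_star)) < 1e-3   # near a stationary point
```

Because $-\log(x_i^2)$ blows up as $x_i \to 0$, the decrease condition in the line search keeps the iterates in the orthant of the starting point, so the loop converges to that orthant's stationary point.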
answered Nov 18 at 16:39 by AnNg; edited Nov 18 at 16:45