Showing $\int_{\mathbb R} |F(x)-G(x)|\,dx = \int_0^1 |F^{-1}(u)-G^{-1}(u)|\,du$ with $F$, $G$ CDFs
Let $X$ and $Y$ be random variables whose distributions admit a moment of order $1$.
Let $F$ be the CDF of $X$ and $G$ the CDF of $Y$.
I want to show that $$\int_{\mathbb R} \left|F(x)-G(x)\right|\,dx = \int_{0}^{1} \left|F^{-1}(u)-G^{-1}(u)\right|\,du\,.$$
probability probability-theory probability-distributions definite-integrals inverse-function
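As a numerical sanity check (not a proof), both sides can be approximated for a pair of distributions whose CDF and quantile function are known in closed form. A minimal Python sketch, assuming two exponential laws with arbitrarily chosen rates, plain Riemann sums, and a truncated $x$-range:

    import math

    a, b = 1.0, 2.0                      # two exponential rates, chosen only for illustration

    def F(x): return 1.0 - math.exp(-a * x) if x >= 0 else 0.0
    def G(x): return 1.0 - math.exp(-b * x) if x >= 0 else 0.0
    def F_inv(u): return -math.log(1.0 - u) / a   # quantile functions in closed form
    def G_inv(u): return -math.log(1.0 - u) / b

    n, x_max = 200_000, 30.0             # x_max truncates a negligible tail
    lhs = sum(abs(F(x_max * k / n) - G(x_max * k / n)) for k in range(n)) * x_max / n
    rhs = sum(abs(F_inv((k + 0.5) / n) - G_inv((k + 0.5) / n)) for k in range(n)) / n
    print(lhs, rhs)                      # both come out close to |1/a - 1/b| = 0.5

The midpoint rule on $(0,1)$ avoids evaluating the quantile functions at $u=1$, where they blow up.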
asked Nov 17 at 12:59 by Farouk Deutsch, edited Nov 17 at 17:18 by Batominovski
What is $F^{-1}$?
– Daniel Camarena Perez
Nov 17 at 13:09
$F^{-1}$ is the quantile function; it is the inverse of the CDF. In this case, since I don't have an explicit CDF, I can't know $F^{-1}$ explicitly.
– Farouk Deutsch
Nov 17 at 13:19
I think that en.wikipedia.org/wiki/Integral_of_inverse_functions would be helpful.
– irchans
Nov 17 at 15:19
Thank you, I can visualize it now, but I'm still stuck on writing out the proof.
– Farouk Deutsch
Nov 17 at 16:07
3 Answers
If $\mu$ is 2D Lebesgue measure, then interpreting the integral as the unsigned area$^*$ between $F$ and $G$,
$$\int |F(x) - G(x)|\, dx = \mu\big[(x,y)\in\mathbb R\times [0,1]:G(x)\le y<F(x)\big] + \mu\big[(x,y)\in\mathbb R\times [0,1]:F(x)\le y<G(x)\big]$$
Then note that
$$G(x) \le y < F(x) \iff x \le G^{-1}(y),\ F^{-1}(y)<x \iff F^{-1}(y)<x \le G^{-1}(y)$$
and similarly $F(x) \le y < G(x) \iff G^{-1}(y) < x \le F^{-1}(y)$.
Thus
$$\int |F(x) - G(x)|\, dx = \mu\big[(x,y)\in\mathbb R\times [0,1]:F^{-1}(y)<x \le G^{-1}(y)\big] + \mu\big[(x,y)\in\mathbb R\times [0,1]:G^{-1}(y) < x \le F^{-1}(y)\big]$$
Returning to the 1D integral notation, this is saying that
$$ \int_{\mathbb R} |F(x) - G(x)|\, dx = \int_0^1 |F^{-1}(y) - G^{-1}(y)|\, dy $$
Finally, a graph: this indicates that the result should be true even for some functions without an inverse. (desmos link)
$^*$ For a positive function $f$, $\int_A f(x)\, dx = \int_A \int_0^{f(x)} dy\,dx = \mu\big( (x,y) \in A\times \operatorname{im}f : 0\le y\le f(x)\big).$ Appropriate case analysis leads to the above expression.
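A quick way to convince oneself of this area interpretation numerically: estimate the Lebesgue measure of the two regions by Monte Carlo and compare it with a direct Riemann sum for $\int|F-G|\,dx$. A rough sketch, assuming two exponential CDFs picked only for illustration and a truncated sampling box $[0,x_{\max}]\times[0,1]$:

    import math, random

    random.seed(0)
    a, b = 1.0, 2.0                          # illustrative exponential rates
    def F(x): return 1.0 - math.exp(-a * x)  # valid for x >= 0, the only range sampled here
    def G(x): return 1.0 - math.exp(-b * x)

    x_max, N = 30.0, 200_000                 # sample the box [0, x_max] x [0, 1]
    hits = 0
    for _ in range(N):
        x, y = random.uniform(0.0, x_max), random.uniform(0.0, 1.0)
        if G(x) <= y < F(x) or F(x) <= y < G(x):   # the point lies between the two graphs
            hits += 1
    area = x_max * hits / N                  # measure of {G <= y < F} union {F <= y < G}

    n = 100_000
    lhs = sum(abs(F(x_max * k / n) - G(x_max * k / n)) for k in range(n)) * x_max / n
    print(area, lhs)                         # both approximate the same area, about 0.5

The Monte Carlo estimate and the Riemann sum converge to the same number, which is exactly the area identity used above.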
answered Nov 17 at 16:18 by Calvin Khor (accepted), edited Nov 17 at 16:35
If I didn't make any mistake, then your guess is correct. The claim works even for $F$ and $G$ without inverses. See my answer.
– Batominovski
Nov 17 at 17:16
@Batominovski I think you didn't make a mistake :)
– Calvin Khor
Nov 17 at 17:20
This answer is inspired by Calvin Khor's solution. Here, we do not assume that $F$ and $G$ possess inverse functions. In this answer, we define $T^{-1}:(0,1)\to \mathbb{R}$ as
$$T^{-1}(u):=\sup\big\{v\in\mathbb{R}\,|\,T(v)\leq u\big\}$$
for any cumulative distribution function $T:\mathbb{R}\to[0,1]$. Since $T^{-1}$ is nondecreasing, it is a measurable function. We first note that, if $T$ admits the first moment, then $$\int_{-\infty}^0\,T(x)\,\text{d}x+\int_0^{+\infty}\,\big(1-T(x)\big)\,\text{d}x=\int_\mathbb{R}\,|x|\,\text{d}T(x)<\infty\,,$$
so we have
$$\int_{-\infty}^0\,T(x)\,\text{d}x<\infty\text{ and }\int_0^{+\infty}\,\big(1-T(x)\big)\,\text{d}x<\infty\,.\tag{*}$$
Now, because $F$ and $G$ admit first moments, the integral
$$I:=\int_\mathbb{R}\,\big|F(x)-G(x)\big|\,\text{d}x$$
is finite due to (*). From Calvin Khor's answer, we have
$$I=\mu(E^+)+\mu(E^-)\,,$$ where $\mu$ is the Lebesgue measure on $\mathbb{R}^2$,
$$E^+:=\big\{(x,y)\in\mathbb{R}\times (0,1)\,|\,G(x)\leq y<F(x)\big\}\,,$$
and
$$E^-:=\big\{(x,y)\in\mathbb{R}\times (0,1)\,|\,F(x)\leq y<G(x)\big\}\,.$$
Observe that
$$E^+\subseteq S^+:=\big\{(x,y)\in\mathbb{R}\times (0,1)\,|\,F^{-1}(y)\leq x\leq G^{-1}(y)\big\}$$
and
$$E^-\subseteq S^-:=\big\{(x,y)\in\mathbb{R}\times (0,1)\,|\,G^{-1}(y)\leq x\leq F^{-1}(y)\big\}\,.$$
Note that $$S^+\setminus E^+\subseteq \big\{(x,y)\in\mathbb{R}\times (0,1)\,|\,x\text{ is the unique solution to }F(x)=y\big\}$$ and $$S^-\setminus E^-\subseteq \big\{(x,y)\in\mathbb{R}\times (0,1)\,|\,x\text{ is the unique solution to }G(x)=y\big\}$$ are of Lebesgue measure $0$. Therefore,
$$I=\mu(S^+)+\mu(S^-)=\int_0^1\,\big|F^{-1}(u)-G^{-1}(u)\big|\,\text{d}u\,.$$
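To experiment with this generalized inverse for distributions that have no classical inverse CDF, one can compute $T^{-1}(u)=\sup\{v\,|\,T(v)\leq u\}$ numerically by bisection (using that $T$ is nondecreasing) and check the identity for two purely atomic laws. A minimal sketch; the particular step CDFs, the bracketing interval, and the grid sizes are arbitrary choices for illustration:

    def F(x):                 # CDF of (delta_0 + delta_2)/2: constant 0.5 on [0, 2), not invertible
        return 0.0 if x < 0 else (0.5 if x < 2 else 1.0)

    def G(x):                 # CDF of the point mass at 1
        return 0.0 if x < 1 else 1.0

    def gen_inv(T, u, lo=-10.0, hi=10.0, iters=60):
        """T^{-1}(u) = sup{v : T(v) <= u}, located by bisection since T is nondecreasing."""
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if T(mid) <= u:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    n = 20_000
    lhs = sum(abs(F(-5 + 10 * k / n) - G(-5 + 10 * k / n)) for k in range(n)) * 10 / n
    rhs = sum(abs(gen_inv(F, (k + 0.5) / n) - gen_inv(G, (k + 0.5) / n)) for k in range(n)) / n
    print(lhs, rhs)           # both are approximately 1

Both numbers agree even though $F$ is flat on $[0,2)$ and hence has no classical inverse.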
answered Nov 17 at 17:07 by Batominovski, edited Nov 18 at 15:38
The proof still works, by the way, if we instead define $T^{-1}:(0,1)\to\mathbb{R}$ to be $$T^{-1}(u):=\inf\big\{v\in\mathbb{R}\,|\,T(v)\geq u\big\}\,,$$ for each distribution function $T$.
– Batominovski
Nov 17 at 17:11
Since you use the notation $F^{-1}$, resp. $G^{-1}$, for the quantile function, you tacitly assume that $F$ and $G$ are continuous and strictly increasing on some interval $J\subset{\mathbb R}$. I suggest you draw a figure showing two reasonable such functions. The left-hand side of the claimed formula then represents the unsigned area enclosed between the graphs of $F$ and $G$. Turning the figure $90^\circ$, you can then verify that the right-hand side of the claimed formula is the same area.
This means that one has to prove that
$$A:=\bigl\{(x,u)\in J\times[0,1]\bigm|\min\{F(x),G(x)\}\leq u\leq\max\{F(x),G(x)\}\bigr\}$$
and
$$A':=\bigl\{(x,u)\in J\times[0,1]\bigm|\min\{F^{-1}(u),G^{-1}(u)\}\leq x\leq\max\{F^{-1}(u),G^{-1}(u)\}\bigr\}$$
are in fact the same sets. This is "pure logic"; one just has to go through the motions.
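If it helps to actually produce the suggested figure: a small matplotlib sketch that shades the area between two illustrative CDFs and, next to it, the area between their quantile functions. The choice of two normal laws via the standard library's NormalDist, the plotting ranges, and the truncation of $(0,1)$ at its endpoints are arbitrary illustration choices:

    import numpy as np
    import matplotlib.pyplot as plt
    from statistics import NormalDist

    Fd, Gd = NormalDist(0.0, 1.0), NormalDist(1.0, 2.0)   # two illustrative distributions

    x = np.linspace(-6, 8, 400)
    u = np.linspace(0.001, 0.999, 400)                    # stay away from the endpoints of (0, 1)
    F = [Fd.cdf(t) for t in x]
    G = [Gd.cdf(t) for t in x]
    Fq = [Fd.inv_cdf(t) for t in u]
    Gq = [Gd.inv_cdf(t) for t in u]

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(x, F, label="$F$"); ax1.plot(x, G, label="$G$")
    ax1.fill_between(x, F, G, alpha=0.3)                  # area between the CDFs (up to truncated tails)
    ax1.set_xlabel("$x$"); ax1.legend()
    ax2.plot(u, Fq, label="$F^{-1}$"); ax2.plot(u, Gq, label="$G^{-1}$")
    ax2.fill_between(u, Fq, Gq, alpha=0.3)                # the same area, "turned by 90 degrees"
    ax2.set_xlabel("$u$"); ax2.legend()
    plt.show()

Both shaded regions have the same area, which is the point of the rotation argument.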
answered Nov 17 at 15:20 by Christian Blatter, edited Nov 17 at 16:33