Question on proof of "every finitely generated vector space has a basis"











The following is a proof of the theorem given in Curtis's Linear Algebra book:




First we consider the case in which $V$ consists of the zero vector
alone. Then the zero vector spans $V$ but cannot be a basis. In this
case we agree that the empty set is a basis for $V$, so that the
dimension of $V$ is zero. Now, let $V \ne \{ 0 \}$ be a vector space
with $n$ generators. By Theorem $(5.1)$ (see below) any set of $n+1$
vectors in $V$ is linearly dependent, and since a set consisting of a
single nonzero vector is linearly independent, it follows that, for
some integer $m \ge 1$, $V$ contains linearly independent vectors
$b_1, \ldots, b_m$ such that any set of $m+1$ vectors in $V$ is
linearly independent. We prove that $\{ b_1, \ldots, b_m \}$ is a
basis for $V$, and for this it is enough to show, for any vector $b \in V$, that $b \in S(b_1, \ldots, b_m)$. Because of the properties
of the set $\{ b_1, \ldots, b_m \}$, the set $\{ b_1, \ldots, b_m, b \}$ is
linearly dependent. Since $\{ b_1, \ldots, b_m \}$ is linearly
independent, Lemma $(7.1)$ implies that $b \in S(b_1, \ldots, b_m)$,
and the theorem is proved.




Here are the theorems used in the proof:




$(5.1)$ Theorem. Let $S$ be a subspace of a vector space $V$ over a
field $F$, such that $S$ is generated by $n$ vectors $\{ a_1, \ldots, a_n \}$.
Suppose $\{ b_1, \ldots, b_m \}$ are vectors in $S$, with
$m > n$. Then the vectors $\{ b_1, \ldots, b_m \}$ are linearly
dependent.



$(7.1)$ Lemma. If $\{ a_1, \ldots, a_m \}$ is a linearly dependent set
and if $\{ a_1, \ldots, a_{m-1} \}$ is linearly independent, then
$a_m$ is a linear combination of $a_1, \ldots, a_{m-1}$.




My questions:




  • How does the empty set span $\{ 0 \}$? The subspace generated by $\{\}$ seems to be the empty set. Am I wrong here?


  • The author claims that "it follows that, for some integer $m \ge 1$, $V$ contains linearly independent vectors $b_1, \ldots, b_m$ such that any set of $m+1$ vectors in $V$ is linearly independent." But how do I find such linearly independent vectors?











linear-algebra proof-explanation

asked Nov 15 at 14:03 by Ashish K
  • As the author says: "we agree that the empty set is a basis for $V$". In other words, it is a conventional agreement amongst mathematicians that a subset of a vector space is a basis if one of two things happens: (1) [insert traditional definition of basis here]; or (2) $V$ consists of the zero vector alone and the subset is empty.
    – Lee Mosher
    Nov 15 at 14:11






  • However, one can also deduce this from another conventional agreement: in any commutative group, the sum over the empty set is equal to the identity element of that group (and this can itself be deduced using the method suggested in the answer of @Brahadeesh).
    – Lee Mosher
    Nov 15 at 14:18

















1 Answer
The author states that the empty set spans the zero subspace $\{ 0 \}$ by convention.



However, this really depends on your definition of the subspace spanned by a set. The definition I use is the following:
the subspace spanned by a set $S \subset V$ is defined to be the intersection of all subspaces of $V$ that contain $S$. That is, if $\langle S \rangle$ denotes the subspace spanned by $S$, then
$$
\langle S \rangle := \bigcap_{S \subset W \leq V} W,
$$

where $W \leq V$ indicates that $W$ is a subspace of $V$. So, if $S$ is the empty set, then the zero subspace $\{ 0 \}$ contains the empty set, and every vector space contains the zero subspace, so $\langle \emptyset \rangle = \{ 0 \}$.
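
For comparison, under the other common definition of the span (the set of all finite linear combinations, which comes up in the comments below), the same conclusion follows from the convention that an empty sum equals the zero vector. A sketch of that computation (my own addition, not part of the original answer):

$$
S(\emptyset) = \Bigl\{ \sum_{i=1}^{k} c_i v_i : k \geq 0, \ c_i \in F, \ v_i \in \emptyset \Bigr\} = \Bigl\{ \sum_{i \in \emptyset} c_i v_i \Bigr\} = \{ 0 \},
$$

since no $v_i$ can be chosen from $\emptyset$, the only admissible choice is $k = 0$, the empty sum.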





For the second question, there appears to be a typo. The sentence should read:




By Theorem $(5.1)$ any set of $n+1$ vectors in $V$ is linearly dependent, and since a set consisting of a single nonzero vector is linearly independent, it follows that, for some integer $m \geq 1$, $V$ contains linearly independent vectors $b_1, \dots, b_m$ such that any set of $m+1$ vectors in $V$ is linearly dependent.




Perhaps that should clear up the confusion. To elaborate on why this corrected statement is true, proceed by contradiction:
suppose it is false that


for some integer $m \geq 1$, $V$ contains linearly independent vectors $b_1, \dots, b_m$ such that any set of $m+1$ vectors in $V$ is linearly dependent.


What would this mean? It means that for each $m \geq 1$, if $b_1, \dots, b_m$ is any set of $m$ linearly independent vectors, then there is a vector $b_{m+1}$ such that $b_1, \dots, b_{m+1}$ is also linearly independent. However, $(5.1)$ says that this is not possible for $m = n$, where $n$ is the size of the given generating set of $V$.
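
In quantifier form (my shorthand, not the book's or the answer's notation), the corrected statement and the negation assumed for contradiction are:

$$
\exists\, m \geq 1 \ \exists \text{ linearly independent } b_1, \dots, b_m \in V \ \forall\, v_1, \dots, v_{m+1} \in V : \{ v_1, \dots, v_{m+1} \} \text{ is linearly dependent},
$$

$$
\neg : \quad \forall\, m \geq 1 \ \forall \text{ linearly independent } b_1, \dots, b_m \in V \ \exists\, v_1, \dots, v_{m+1} \in V : \{ v_1, \dots, v_{m+1} \} \text{ is linearly independent}.
$$

The edit below derives this negation step by step.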





Edit: based on the comments requesting clarification.



I am not sure that the statement under consideration is of the form "(not P) or Q". I always prefer to reason out the negation in a step-by-step fashion rather than work with formal statements and the rules for their negation. It leads to less confusion, at least in my mind.



Now, the negation of




There exists $m \geq 1$ such that ~blah~.




is




For every $m \geq 1$ we have ~not blah~.




Here ~blah~ is




There exists a set of linearly independent vectors $b_1, \dots, b_m$ such that ~foo~.




So, ~not blah~ is




For any set of linearly independent vectors $b_1, \dots, b_m$, we have ~not foo~.




Here, ~foo~ is




Any set of $m+1$ vectors in $V$ is linearly dependent.




So, ~not foo~ is




Some set of $m+1$ vectors in $V$ is linearly independent.




So, the negation of the statement under consideration is:




For every $m \geq 1$: for any set of linearly independent vectors $b_1, \dots, b_m$, there is some set of $m+1$ vectors in $V$ that is linearly independent.




There is no loss of generality in taking the set of $m+1$ linearly independent vectors in $V$ to be of the form $b_1, \dots, b_{m+1}$, because any subset of a linearly independent set is linearly independent. So, if we start with the set $b_1, \dots, b_m$ of $m$ linearly independent vectors and get $v_1, \dots, v_{m+1}$, a set of $m+1$ linearly independent vectors as per the claim, then in particular $v_1, \dots, v_m$ is a set of $m$ linearly independent vectors such that there exists $v_{m+1}$ making $v_1, \dots, v_{m+1}$ linearly independent. So, we might as well relabel the $v_i$'s as $b_i$'s and proceed inductively, since this does not change the proof.
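
A side note on the second bullet of the question ("how do I find such linearly independent vectors"): the proof is purely existential, but over $\mathbb{R}$ one can compute a maximal linearly independent subset of a finite generating set by greedy rank checks. Here is a minimal numpy sketch; the function name and example data are my own illustration, not from the book or this answer:

    import numpy as np

    def maximal_independent_subset(generators):
        # Greedily keep each generator that strictly increases the rank,
        # i.e. that is not a linear combination of the vectors kept so far.
        # The result is linearly independent and spans the same subspace,
        # so it plays the role of b_1, ..., b_m in the proof.
        basis = []
        for v in generators:
            candidate = np.vstack(basis + [v])
            if np.linalg.matrix_rank(candidate) > len(basis):
                basis.append(v)
        return basis

    # Example: four generators of a 2-dimensional subspace of R^3.
    gens = [np.array([1.0, 0.0, 0.0]),
            np.array([2.0, 0.0, 0.0]),   # multiple of the first: skipped
            np.array([0.0, 1.0, 0.0]),
            np.array([1.0, 1.0, 0.0])]   # sum of the first and third: skipped
    print(len(maximal_independent_subset(gens)))  # prints 2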



Hope this helps. Feel free to reply in the comments for any clarifications.






answered Nov 15 at 14:18, edited Nov 17 at 5:23
Brahadeesh
  • Thanks that helped. It seems I need to prove the equivalence of the above definition with the definition I already know.
    – Ashish K
    Nov 15 at 14:37










  • @AshishK Possibly you use the definition that the subspace spanned by a set consists of the set of all finite linear combinations of those elements? It is indeed a good exercise to see that these two definitions are equivalent.
    – Brahadeesh
    Nov 15 at 14:38






  • Glad to be of help :)
    – Brahadeesh
    Nov 15 at 14:38






  • @AshishK I have added some clarification in the answer in response to your comments. Do let me know if anything is still unclear. I'll be happy to respond.
    – Brahadeesh
    Nov 17 at 5:24






  • It seems there's already a duplicate question. I completed my proof using the hints in that question. Thanks for all the help!
    – Ashish K
    Nov 17 at 16:11










