Proof that a matrix is invertible if and only if it meets this property.
We have $A\in \operatorname{Mat}_n(K)$ and I am told to prove that $A\in GL_n(K)$ if and only if it meets the following property: if
$$A\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}=\begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix},$$
then $x_1=\dots=x_n = 0$.
As a tip I am told to consider a basis $\beta$ of an $n$-dimensional vector space $V$ and an $f\in \operatorname{End}(V)$ whose associated matrix is $A$.
I don't know how to use that tip to prove the statement.
Thanks in advance.
linear-algebra
asked Nov 16 at 19:01 by Andarrkor
edited Nov 16 at 20:20 by Bernard
Suppose that $A$ is invertible and $Ax=0$. Then $x=A^{-1}Ax=A^{-1}(0)=0$ as required.
– Dietrich Burde
Nov 16 at 19:03
3 Answers
Accepted answer (2 votes), answered Nov 16 at 20:27 by Bernard, edited Nov 17 at 19:53:
As indicated by your tip, consider the endomorphism $f$ represented by this matrix in a given basis of a $K$-vector space $V$ of dimension $n$.
Remember that $A$ is invertible if and only if $f$ is an isomorphism (more exactly, an automorphism, since $f$ is an endomorphism).
Now, in a finite-dimensional space, $f$ is an automorphism if and only if it is injective (and also if and only if it is surjective).
Now, how do you characterise an injective linear map?
Some more details:
$f$ is injective (hence bijective) if and only if $\ker f=\{0\}$, i.e. if and only if
$$f(v)=0 \implies v=0.$$
What is the relation between $f(v)$ and $A$?
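For reference, one possible way to assemble the chain of equivalences sketched above (the last step, relating $f(v)$ to $A$, is discussed in the comments below):
$$A\in GL_n(K) \iff f \text{ is an automorphism} \iff \ker f=\{0\} \iff \Bigl(A\begin{pmatrix}x_1\\ \vdots\\ x_n\end{pmatrix}=0 \implies x_1=\dots=x_n=0\Bigr).$$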
A linear transformation is injective when $\ker f = \{0_V\}$. But I do not know how to get to the matrix form from there.
– Andarrkor
Nov 16 at 20:34
I've added some details. Is that clearer now?
– Bernard
Nov 16 at 20:42
Sorry for asking so many questions, but there is something that I don't get. To get the images of the linear transformation $f$, with $v$ any vector of $V$ and $\{v_1,\dots,v_n\}$ a basis of $V$, we do $f(v) = (v_1\ \dots\ v_n)A\begin{bmatrix}\lambda_1\\ \vdots\\ \lambda_n\end{bmatrix}$, where $\begin{bmatrix}\lambda_1\\ \vdots\\ \lambda_n\end{bmatrix}$ are the coordinates of $v$ in the basis $\{v_1,\dots,v_n\}$. I don't understand what the relation is between that and $Ax=0$.
– Andarrkor
Nov 16 at 21:02
Let's call $(x_1, x_2,\dots, x_n)$ the (unknown) coordinates of a vector $v$ in the kernel. Then $f(v)=A\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}$. So $v\in\ker f$ means the coordinates of $v$ satisfy the linear system of equations defined by $A$.
– Bernard
Nov 16 at 21:09
Thanks for answering. But what happens with the row of basis vectors $(v_1\ \dots\ v_n)$ that also appears in the multiplication?
– Andarrkor
Nov 17 at 1:34
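A note on the last question above (a sketch in the notation of the comments, with $a_{ji}$ the entries of $A$): the row of basis vectors only converts coordinates back into an element of $V$. Writing $v=x_1v_1+\dots+x_nv_n$,
$$f(v)=(v_1\ \dots\ v_n)\,A\begin{bmatrix}x_1\\ \vdots\\ x_n\end{bmatrix}=\sum_{j=1}^n\Bigl(\sum_{i=1}^n a_{ji}x_i\Bigr)v_j,$$
and since $v_1,\dots,v_n$ are linearly independent, this is $0$ exactly when $A\begin{bmatrix}x_1\\ \vdots\\ x_n\end{bmatrix}=0$. So $f(v)=0$ and $Ax=0$ express the same condition.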
Answer (0 votes), answered Nov 16 at 19:16 by The Count:
To get you started: a matrix, viewed as a linear map, is invertible if and only if it is injective and surjective. If $Ax=0$ for some non-zero vector $x$, then $A$ is not injective, so it is not invertible. The contrapositive is that if $A$ is invertible, then $Ax=0$ only for $x=0$.
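For a concrete instance of the non-injective case, take $K=\mathbb{R}$ and
$$A=\begin{pmatrix}1&1\\1&1\end{pmatrix},\qquad x=\begin{pmatrix}1\\-1\end{pmatrix}:\qquad Ax=\begin{pmatrix}0\\0\end{pmatrix},$$
so $A$ sends the non-zero vector $x$ to $0$, hence is not injective, and indeed $\det A=0$, so $A$ is not invertible.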
Answer (0 votes), answered Nov 16 at 19:40 by Euler Pythagoras:
If $A$ is invertible, then using the rank-nullity theorem we can prove that $\ker(A) = \{0\}$, so if $Ax = 0$ then $x=0$.
Conversely, if $A$ meets the condition
$$A\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}=\begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix} \implies x_1=\dots=x_n = 0,$$
it means that $\ker(A) = \{0\}$. Using the rank-nullity theorem again, we can prove that $\operatorname{rank}(A) = n$, which ensures that $A$ is invertible.
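Spelled out, the rank-nullity step looks like this (a sketch, applying the theorem to the linear map $x\mapsto Ax$ on $K^n$):
$$\dim\ker(A)+\operatorname{rank}(A)=n,$$
so $\ker(A)=\{0\}$ gives $\operatorname{rank}(A)=n$, hence the map is surjective and therefore bijective, i.e. $A$ is invertible; conversely, $A$ invertible gives $\operatorname{rank}(A)=n$, hence $\dim\ker(A)=0$.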
We have learnt the rank-nullity theorem only for linear maps, not for matrices. So I have to use the linear map whose associated matrix is $A$ to prove that. I know that there is a link between them, since matrices and linear maps are isomorphic, but I do not know how to "pass" from matrices to linear maps and vice versa. Hope you understood my problem.
– Andarrkor
Nov 16 at 19:56