Shannon Entropy of 0.922, 3 Distinct Values

Given a string of values $AAAAAAAABC$, the Shannon entropy in log base $2$ comes to $0.922$. From what I understand, in base $2$, the Shannon entropy rounded up is the minimum number of bits needed to represent a single one of the values.
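For reference, the $0.922$ figure follows from the empirical symbol frequencies $8/10$, $1/10$ and $1/10$; a minimal Python sketch reproduces it:

    from math import log2

    # Empirical symbol frequencies in AAAAAAAABC
    probs = [8 / 10, 1 / 10, 1 / 10]

    # Shannon entropy in bits per symbol: H = -(sum of p * log2(p))
    entropy = -sum(p * log2(p) for p in probs)
    print(round(entropy, 3))  # 0.922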



This is taken from the introduction of this Wikipedia page:



https://en.wikipedia.org/wiki/Entropy_%28information_theory%29



So, how can three values be represented by one bit? $A$ could be $1$, $B$ could be $0$; but how could you represent $C$?



Thank you in advance.

Tags: information-theory, mathematical-foundations, entropy, binary

asked Nov 18 at 19:23 by Sean C

3 Answers

Accepted answer (16 votes)

The entropy you've calculated isn't really for the specific string but, rather, for a random source of symbols that generates $A$ with probability $\tfrac{8}{10}$, and $B$ and $C$ with probability $\tfrac{1}{10}$ each, with no correlation between successive symbols. The calculated entropy for this distribution, $0.922$, means that you can't represent strings generated from this distribution using less than $0.922$ bits per character, on average.



          It might be quite hard to develop a code that will achieve this rate.* For example, Huffman coding would allocate codes $0$, $10$ and $11$ to $A$, $B$ and $C$, respectively, for an average of $1.2$ bits per character. That's quite far from the entropy, though still a good deal better than the naive encoding of two bits per character. Any attempt at a better coding will probably exploit the fact that even a run of ten consecutive $A$s is more likely (probability $0.107$) than a single $B$.





          * Turns out that it isn't hard to get as close as you want – see the other answers!
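As a quick check of the $1.2$ bits per character figure, here is a minimal Python sketch (the dictionary names are mine, for illustration):

    # Per-symbol Huffman code from the answer: A -> 0, B -> 10, C -> 11
    probs = {"A": 8 / 10, "B": 1 / 10, "C": 1 / 10}
    code = {"A": "0", "B": "10", "C": "11"}

    # Expected codeword length in bits per character
    avg_bits = sum(p * len(code[s]) for s, p in probs.items())
    print(avg_bits)  # 1.2, vs. the entropy of about 0.922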






answered Nov 18 at 21:39 by David Richerby

Answer (17 votes)

          Here is a concrete encoding that can represent each symbol in less than 1 bit on average:



First, split the input string into pairs of successive characters (e.g. AAAAAAAABC becomes AA|AA|AA|AA|BC). Then encode AA as 0, AB as 100, AC as 101, BA as 110, CA as 1110, BB as 111100, BC as 111101, CB as 111110, CC as 111111.
I haven't said what happens if there is an odd number of symbols, but you can just encode the last symbol using some arbitrary encoding; it doesn't really matter when the input is long.



          This is a Huffman code for the distribution of independent pairs of symbols, and corresponds to choosing $n = 2$ in Yuval's answer. Larger $n$ would lead to even better codes (approaching the Shannon entropy in the limit, as he mentioned).



The average number of bits per symbol pair for the above encoding is
$$\frac{8}{10} \cdot \frac{8}{10} \cdot 1 + 3 \cdot \frac{8}{10} \cdot \frac{1}{10} \cdot 3 + \frac{1}{10} \cdot \frac{8}{10} \cdot 4 + 4 \cdot \frac{1}{10} \cdot \frac{1}{10} \cdot 6 = 1.92,$$
i.e. $1.92/2 = 0.96$ bits per symbol, which is actually not that far from the Shannon entropy for such a simple encoding.
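Here is a minimal Python sketch (the dictionary names are mine, for illustration) that recomputes the $1.92$ figure under the independence assumption:

    from itertools import product

    probs = {"A": 8 / 10, "B": 1 / 10, "C": 1 / 10}
    pair_code = {
        "AA": "0",      "AB": "100",    "AC": "101",
        "BA": "110",    "CA": "1110",   "BB": "111100",
        "BC": "111101", "CB": "111110", "CC": "111111",
    }

    # Expected bits per pair, assuming the two symbols are independent
    avg_pair = sum(probs[x] * probs[y] * len(pair_code[x + y])
                   for x, y in product(probs, repeat=2))
    print(avg_pair, avg_pair / 2)  # ~1.92 bits per pair, ~0.96 per symbol

Encoding the example string itself gives AA|AA|AA|AA|BC -> 0 0 0 0 111101, i.e. $10$ bits for $10$ symbols.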






answered Nov 19 at 0:20 by nomadictype

Answer (12 votes)

Let $\mathcal{D}$ be the following distribution over $\{A,B,C\}$: if $X \sim \mathcal{D}$ then $\Pr[X=A] = 4/5$ and $\Pr[X=B]=\Pr[X=C]=1/10$.



For each $n$ we can construct prefix codes $C_n\colon \{A,B,C\}^n \to \{0,1\}^*$ such that
$$
\lim_{n\to\infty} \frac{\mathbb{E}_{X_1,\ldots,X_n \sim \mathcal{D}}\bigl[\,|C_n(X_1,\ldots,X_n)|\,\bigr]}{n} = H(\mathcal{D}).
$$



In words, if we encode a large number of independent samples from $\mathcal{D}$, then on average we need $H(\mathcal{D}) \approx 0.922$ bits per sample. Intuitively, the reason we can make do with less than one bit per sample is that each individual sample is quite likely to be $A$.



            This is the real meaning of entropy, and it shows that computing the "entropy" of a string $A^8BC$ is a rather pointless exercise.
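To see the limit numerically, here is a minimal sketch (assuming Python; the helper function is mine, not part of the answer) that builds a Huffman code over blocks of $n$ independent symbols and prints the resulting bits per symbol:

    import heapq
    from functools import reduce
    from itertools import count, product
    from math import log2
    from operator import mul

    probs = {"A": 4 / 5, "B": 1 / 10, "C": 1 / 10}
    H = -sum(p * log2(p) for p in probs.values())  # about 0.922

    def huffman_avg_length(ps):
        """Expected codeword length of a Huffman code for probabilities ps.

        The expected length equals the sum, over all merge steps of
        Huffman's algorithm, of the merged probability.
        """
        tie = count()  # unique tiebreaker for entries with equal probability
        heap = [(p, next(tie)) for p in ps]
        heapq.heapify(heap)
        total = 0.0
        while len(heap) > 1:
            p1, _ = heapq.heappop(heap)
            p2, _ = heapq.heappop(heap)
            total += p1 + p2
            heapq.heappush(heap, (p1 + p2, next(tie)))
        return total

    for n in (1, 2, 4, 8):
        # Distribution of a block of n independent symbols (3**n outcomes)
        block = [reduce(mul, combo, 1.0)
                 for combo in product(probs.values(), repeat=n)]
        print(n, huffman_avg_length(block) / n)  # approaches H as n grows

Since an optimal code for $n$-blocks uses between $nH(\mathcal{D})$ and $nH(\mathcal{D}) + 1$ bits on average, the per-symbol rate lies in $[H(\mathcal{D}), H(\mathcal{D}) + 1/n)$ and converges to $0.922$ as $n$ grows.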






answered Nov 18 at 21:28 by Yuval Filmus