What is the most reasonable way for non-binary computers to have become standard?
33 votes
Let us assume planet Earth, with a history similar to ours, except that the result of the computer revolution is not a computer system based on binary (i.e. 0 and 1) but some other system. This system could be digital, with more than two digits, or otherwise.
Transistors were invented in this alternate timeline, in the 1950s. Any other technology that was invented can be shaped to favor a non-binary computing system.
What is the minimal historical change that would make non-binary computers the standard in a world equivalent to our modern world?
technology alternate-history computers
Another problem with "non-binary becoming the standard" is that binary electronics are a lot faster, because you only need two voltage states, and the circuitry required for that is stupendously simple -- and thus can be made *very fast* -- compared to multi-voltage systems. It's why binary became dominant.
– RonJohn
Nov 26 at 15:00
Binary became dominant because transistors, which are intrinsically two-state devices, were invented. There was no point developing 10-state semiconductor devices, because 10,000 transistors is already more efficient (in almost every way) than a 10-state thermionic device.
– OrangeDog
Nov 26 at 15:25
The earliest computers used decimal. This became limiting as they became faster. Basically it's because it's faster to switch (and measure) on versus off, than it is to switch to (and measure) one of 10 possible voltages.
– Aaron F
Nov 26 at 16:20
We don't know of any technology which would allow a non-binary discrete computer to be more efficient than its binary equivalent. If we did, non-binary would quickly be adopted. What you need is a universe with physics that make an efficient three-state device possible - where you naturally get three states and would need to waste one of those states, at an efficiency cost, to produce a binary system. This is opposite to the condition now where we have efficient two-state devices and need to invent some way to represent three states at a higher level to produce a non-binary system.
– J...
Nov 26 at 17:03
Just have one idiot who worked on early computers pick that, and the rest should follow. We do a lot of dumb things just because it's the "standard", in particular weights and measures (rotation has 360 degrees, time has 60 minutes, etc, due to ancient Sumeria's weird number system). As soon as enough people learn a system it becomes very hard to change, regardless of other advantages.
– Bert Haddad
2 days ago
asked Nov 26 at 13:22
kingledion
23 Answers
42 votes
Non-binary computers, in particular ternary computers, have been built in the past (emphasis mine).
One early calculating machine, built by Thomas Fowler entirely from wood in 1840, operated in balanced ternary. The first modern, electronic ternary computer Setun was built in 1958 in the Soviet Union at the Moscow State University by Nikolay Brusentsov, and it had notable advantages over the binary computers which eventually replaced it, such as lower electricity consumption and lower production cost.
If you want to make ternary computers the standard, I think you should leverage those advantages: make energy more expensive, so that saving energy is a big advantage, and make production more expensive.
Note that, since smelting silicon is an energy-intensive activity, increasing the cost of energy alone will already indirectly drive up production costs.
L.Dutch - Although I answered differently I think the claim about trinary being energy saving is worth following up. Can you back this up with actual references and research? I'd be interested because I'm reluctant to accept it without being convinced. In particular I wonder if the cost of producing the trinary technology would offset the minor savings of using it.
– chasly from UK
Nov 26 at 14:00
It needed more memory when memory was expensive and limited. It demands more advanced components (3 states). It takes more time and knowledge to build them. And after binary had so much behind it, it is just too wasteful. There is no point in being better if you are too demanding and late.
– Artemijs Danilovs
Nov 26 at 15:42
From an information theoretic viewpoint, the most efficient base to compute in would be "e", but since that's not an integer, 3 would be the closest integer base.
– Tangurena
Nov 26 at 16:40
Also keep in mind that Setun was more efficient than binary computers largely because of its design - it came during a major transitional period where semiconductor diodes were just becoming available but transistors had not yet properly matured. They built Setun with diodes and magnetic cores (a system amenable to a three-state implementation) and this would be competing with vacuum tube based computers of the time. With transistor based electronics introduced this gap slammed shut - dramatically. Computers today are about a trillion times more efficient - that's a tough record to beat.
– J...
Nov 26 at 18:22
@Tangurena I honestly can't tell whether you're joking or just being mathematically deep. Nice comment either way...
– leftaroundabout
Nov 27 at 13:14
34 votes
Instead of avoiding it, transcend binary:
Either let the evolution of technology take its course and somehow create a demand for non-binary processors. This is analogous to what is happening now in the cryptocurrency scene: the developers of IOTA based their project on a ternary architecture model and are even working on a ternary processor (JINN).
Or let aggressive patenting and licensing in the early stages of binary processors (e.g. a general patent for binary processors due to lobbying or misjudgements in the patent office) be the cause for starting work on non-binary processors with less restrictive and more collaborative patents.
Patentability requirements are: novelty, usefulness, and non-obviousness [1].
[the] nonobviousness principle asks whether the invention is an adequate distance beyond or above the state of the art [2]
So this could be used to have a patent granted on binary processors. And even if it were an illegitimate patent that would later be revoked in lawsuits, the situation could give rise to non-binary processors.
You should focus on that second point and expand it more, that sounds interesting.
– kingledion
Nov 26 at 14:41
Free/open hardware doesn't get monetized very well.
– RonJohn
Nov 26 at 14:42
@RonJohn That's right. I'll update the answer. Maybe less restrictive patenting/licensing.
– mike
Nov 26 at 14:45
Advanced quantum computers could be a good choice for option one.
– Vaelus
Nov 26 at 15:54
@JohnDvorak The basis may be binary, but the superpositions are not. While we measure the results of quantum computation as binary numbers, the actual computations are not themselves binary.
– Vaelus
Nov 26 at 16:37
29 votes
I would like to advance the idea of an analog computer.
Analog computers are something like the holy grail of electronics. They have the potential for nearly infinitely more computing power, limited only by the voltage or current measuring discriminator (i.e., the precision of measuring an electric state or condition).
The reason we don't have them is because using transistors in their switching mode is simple. Simple, simple, simple. So simple, that defaulting everything to the lowest common denominator (binary, single-variable logic) was obvious.
But even today, change is coming.
Analog computing, which was the predominant form of high-performance computing well into the 1970s, has largely been forgotten since today's stored program digital computers took over. But the time is ripe to change this. (Source)
If analog and hybrid computers were so valuable half a century ago, why did they disappear, leaving almost no trace? The reasons had to do with the limitations of 1970s technology: Essentially, they were too hard to design, build, operate, and maintain. But analog computers and digital-analog hybrids built with today’s technology wouldn’t suffer the same shortcomings, which is why significant work is now going on in analog computing in the context of machine learning, machine intelligence, and biomimetic circuits.
...
They were complex, quirky machines, requiring specially trained personnel to understand and run them—a fact that played a role in their demise.
Another factor in their downfall was that by the 1960s digital computers were making large strides, thanks to their many advantages: straightforward programmability, algorithmic operation, ease of storage, high precision, and an ability to handle problems of any size, given enough time. (Source)
But, how to get there without getting hung up on the digital world?
A breakthrough in discrimination. Transistors, for all their value, are only as good as their manufacturing process. The more precisely constructed the transistor, the more precise the voltage measurement can be. The more precise the voltage measurement, the greater the programmatic value of a change in voltage, meaning faster computing and (best of all for most space applications) faster reaction to the environment.
Breakthrough in modeling equations. Digital computers are, by comparison, trivial to program (hence, BASIC). Their inefficiency is irrelevant compared to their ease of use. However, this is because double-integration is a whomping difficult thing to do on paper, much less to describe such that a machine can process it. But, what if we could have languages like Wolfram, R, or Haskell without having to go through the digital revolution of BASIC, PASCAL, FORTRAN, and C first? Our view of programming is very much based on how we perceive (or are influenced by) the nature of computation. Had someone come up with an efficient and flexible mathematical language before the discovery of switching transistors... the world would have changed forever.
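To make the "program by wiring equations" idea concrete, here is a minimal sketch (Python used purely as an illustration; all names and constants are invented for the example) of the classic analog-computer patch for a damped oscillator: a summing junction and two integrators wired in a loop, stepped digitally here only to show the structure.

    # Classic analog patch for x'' = -2*zeta*omega*x' - omega**2 * x:
    # a summing junction feeding two integrators in a loop.
    # On a real analog machine these elements run continuously in parallel;
    # we step them with a small dt just to show the wiring.
    def simulate_patch(omega=1.0, zeta=0.1, x0=1.0, v0=0.0, dt=1e-3, t_end=10.0):
        x, v = x0, v0          # outputs of the two integrators
        t = 0.0
        while t < t_end:
            a = -2 * zeta * omega * v - omega ** 2 * x   # summing junction
            v += a * dt                                  # first integrator
            x += v * dt                                  # second integrator
            t += dt
        return x

    print(simulate_patch())    # oscillator amplitude after 10 simulated seconds

The point of the sketch is that the "program" is the wiring diagram itself; there is no list of instructions to step through.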
Would this entirely remove digital from the picture?
Heck, no. That's like saying the development of a practical Lamborghini (if the word practical can ever be applied to a Lamborghini) before, say, the Edsel would mean we would have never seen the Datsun B210. The single biggest weakness of analog computing is the human-to-machine interface. The ability to compute in real time rather than through a series of discrete, often barely related steps is how our brains work — but that doesn't translate well to telling a machine how to do its job. The odds are good that a hybrid machine (digital interface to an analog core) would be the final solution (as it likely would be today). Is this germane to your question? Not particularly.
Conclusion
Two breakthroughs, one in transistor manufacture and the other in symbolic programming, are all that would be needed to advance analog computation, with all of its nearly limitless computational power, over digital computing.
It's happening, although slowly: scientificamerican.com/article/…
– Jan Dorniak
Nov 26 at 20:39
If Neural Networks had been better developed before digital surpassed analog, perhaps the energy savings of analog neural networks would have prevented binary's triumph. This change might have happened if only Marvin Minsky had discovered the potential of backpropagation in his book "Perceptrons", rather than focusing on neural networks' limitations.
– AShelly
Nov 26 at 23:40
The source seems pretty biased. The largest analog computer setup I'm aware of was the Dutch "Deltar" simulation of the national flood barrier system. While it was used in the 70s, it was already outdated at the time. Its design dated back to the 40s, and it was built in the 60s. And very importantly, it was not general-purpose at all. It wasn't even domain-general; it simulated the Dutch water system and nothing else.
– MSalters
Nov 27 at 15:35
The Lamborghini 2.86 DT: lectura-specs.com/en/model/agricultural-machinery/… seems quite practical to me, but I do get your point.
– BentNielsen
yesterday
14 votes
Having thought about this and looked at L.Dutch's answer, I may withdraw my original answer (or leave it just for interest).
Instead I will give a political answer.
As mentioned by L.Dutch, the Soviets came up with a ternary system (see below). Because of the limited use of the Russian language throughout the world, the Soviets often resented the fact that US scientific papers got more credence - after all, English is the lingua franca of science. (This is true by the way, not a fiction; I'll look for references.)
Suppose the Russians had won a war over the West. It was common in Soviet Russia for science to be heavily politicised (again I'll look for references). Therefore, regardless of the validity of a non-binary system the Russians could have mandated ternary or some other base simply as a form of triumphalism.
Note - I'm chickening out of finding references at the moment. I've found some but they involve delving into Marxist doctrine or buying an expensive book. My personal knowledge of the situation came from talking to a British scientist who was digging through old Russian papers looking for bits that had been missed or had been distorted by doctrine. Maybe I'll delve further but not right now.
The first modern, electronic ternary computer Setun was built in 1958 in the Soviet Union at the Moscow State University by Nikolay Brusentsov.
https://en.wikipedia.org/wiki/Ternary_computer
This would hardly be a minimal change.
– mike
Nov 26 at 14:56
@mike - It's not a small change but that doesn't exclude it being a minimal one, unless you can think of a smaller political change, in which case go ahead.
– chasly from UK
Nov 26 at 14:58
I agree that it can be minimal in a political solution space. I hereby withdraw my comment :D
– mike
Nov 26 at 15:12
A more minimal change could be ternary computing becoming widespread in the Eastern bloc (and perhaps China but that would involve changing up the Sino-Soviet split causing ripple effects). Later on, the transition to a freer economy (either by reform or collapse of the USSR) could lead to ternary computers being widespread without something as drastic as WWIII.
– 0something0
11 hours ago
11 votes
A ternary system would be preferred in a world where data storage cost exceeds all other cost considerations in computers. This preference would be due to radix economy, which essentially quantifies the relative cost of storing numbers in a particular numbering system. Euler's number e ≈ 2.718 has the lowest radix economy. Among integers, 3 has the lowest radix economy, lower than 2 and 4 (which have the same).
If the first storage medium used for computing had stored ternary digits for less than, or only slightly more than, the cost of binary digits, and if processing cost had been insignificant compared to storage cost, ternary computing might have become the dominant standard. The advantage of ternary systems is small (around 5 percent), but could be important if storage cost was a serious consideration.
Binary computers dominate today mostly because electricity was the first effective medium to store and process numbers, and a single threshold voltage to distinguish between two states is easier to manage than two or more thresholds for three or more states.
Build your transistors in a medium that can store and process ternary digits efficiently, and emphasize the high cost of storage. A mechanical example would be a switch that can take three positions in a triangle.
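As a rough, back-of-the-envelope sketch of the radix-economy argument (illustrative only, taking the storage cost of a number as the digit count times the number of states per digit):

    def digits_needed(base, n):
        """How many base-`base` digits it takes to write n."""
        d = 0
        while n:
            n //= base
            d += 1
        return d

    def radix_economy(base, n):
        """Rough storage cost: digit count times states per digit."""
        return base * digits_needed(base, n)

    N = 10 ** 6   # arbitrary example range
    for b in (2, 3, 4, 10):
        print(b, radix_economy(b, N))   # base 3 comes out slightly cheapest

For numbers up to a million this gives 39 for base 3 against 40 for bases 2 and 4 (and 70 for base 10); in the continuous limit the ternary advantage over binary works out to roughly the 5 percent quoted above.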
Very interesting!
– kingledion
Nov 27 at 15:11
10 votes
Toolforger has one thing right: Binary computers are the most efficient computing devices possible. Period. Ternary has no technological advantage whatsoever.
However, I'm going to give a suggestion of how you can offset the disadvantage of ternary computing, to allow your society to actually use ternary computers instead of binary ones:
Your society has evolved to use a balanced numeral system.
Balanced numeral systems don't just use positive digits like we do, they use an equal number of negative and positive digits. As such, balanced ternary uses three digits for -1, 0, and 1 instead of the unbalanced 0, 1, and 2. This has several beneficial consequences:
Balanced numeral systems have symmetries that unbalanced systems lack. Not only can you exploit commutativity when doing calculations (you know what 2+3 is, so you know what 3+2 is), but also symmetries based on sign: -3-2 = -(3+2), -3*2 = 3*-2, -3*-2 = 3*2, and 3*-2 = -(3*2).
You have more computations with trivial outcome: x+(-x) = 0 and -1*x = -x.
The effect is that you have much less to learn when learning balanced numeral systems. For instance, unbalanced decimal requires you to learn 81 data points by heart to perform all four basic computations, whereas balanced nonal (9 digits from -4 to 4) requires only 31 data points, of which only 6 are for multiplication. The right-most column uses -4 = d, -3 = c, -2 = b, and -1 = a as negative digits:
2*2 = 0*9 +4 = 4
2*3 = 1*9 -3 = 1c
2*4 = 1*9 -1 = 1a
3*3 = 1*9 +0 = 10
3*4 = 1*9 +3 = 13
4*4 = 2*9 -2 = 2b
The entire rest is either trivial or follows from symmetries. That's all the multiplication table your school kids need to learn!
Because you can get both positive and negative carries, you get far fewer and smaller carries in long additions. They simply tend to cancel each other out.
Because you have negative digits as well as positive ones, negative numbers are just an integral part of the system. In decimal, you have to decide which number is greater when doing a subtraction, then subtract the smaller number from the larger one, then reattach a sign to the result based on which of the two numbers was greater. In balanced systems you don't care which number is greater, you just do the subtraction. Then you look at the result and see whether it's positive or negative...
As a matter of fact, I once learned to use balanced nonal just for fun, and in general, it's indeed much easier to use than decimal.
My point is: To anyone who has been brought up calculating in a balanced numeral system, an unbalanced system would just feel so unimaginably awkward and cumbersome that they will basically think that ternary is the smallest base you can use. Because binary lacks the negative digits, how are you supposed to compute with that? What do you do when you subtract 5 from 2? You absolutely need a -1 for that!
As such, a society of people with a balanced numeral system background may conceivably settle on balanced ternary computers instead of binary ones. And once a chunk of nine balanced ternary digits has been generally accepted as the smallest unit of information exchange, no one will want to use 15 bits (what an awkward number!) to transmit the same amount of information in a binary fashion, with all the losses that would imply.
The result is basically a lock-in effect to balanced ternary that would keep people from using binary hardware.
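A small sketch (hypothetical helper code, only to illustrate the sign argument) showing that in balanced ternary, negating a number is just flipping its digits, so negative values need no separate sign convention:

    # Balanced ternary digits are -1, 0, +1, written here as "-", "0", "+".
    # Negation is digit-wise flipping, so there is no sign bit and no
    # two's-complement-style trick. Illustrative sketch only.
    DIGIT = {-1: "-", 0: "0", 1: "+"}

    def to_balanced_ternary(n):
        if n == 0:
            return "0"
        out = []
        while n != 0:
            r = n % 3            # 0, 1 or 2
            if r == 2:           # a "2" becomes -1 plus a carry
                r = -1
                n += 1
            out.append(DIGIT[r])
            n //= 3
        return "".join(reversed(out))

    print(to_balanced_ternary(5))    # "+--"  (9 - 3 - 1)
    print(to_balanced_ternary(-5))   # "-++"  (every digit flipped)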
Aside: Unbalanced decimal vs. balanced nonal
Here is a more detailed comparison between decimal and balanced nonal. I'm using a, b, c, d as the negative digits -1, -2, -3, -4, respectively:
Negation
Here the learning effort for decimal is zero. For balanced nonal, you have to learn the following table with nine entries:
| d c b a 0 1 2 3 4
--------+------------------
inverse | 4 3 2 1 0 a b c d
Addition
Decimal has the following addition table; the right table shows the 45 entries that need to be learned:
+ | 0 1 2 3 4 5 6 7 8 9 + | 0 1 2 3 4 5 6 7 8 9
--+----------------------------- --+-----------------------------
0 | 0 1 2 3 4 5 6 7 8 9 0 |
1 | 1 2 3 4 5 6 7 8 9 10 1 | 2
2 | 2 3 4 5 6 7 8 9 10 11 2 | 3 4
3 | 3 4 5 6 7 8 9 10 11 12 3 | 4 5 6
4 | 4 5 6 7 8 9 10 11 12 13 4 | 5 6 7 8
5 | 5 6 7 8 9 10 11 12 13 14 5 | 6 7 8 9 10
6 | 6 7 8 9 10 11 12 13 14 15 6 | 7 8 9 10 11 12
7 | 7 8 9 10 11 12 13 14 15 16 7 | 8 9 10 11 12 13 14
8 | 8 9 10 11 12 13 14 15 16 17 8 | 9 10 11 12 13 14 15 16
9 | 9 10 11 12 13 14 15 16 17 18 9 | 10 11 12 13 14 15 16 17 18
The same table for balanced nonal only has 16 entries that need to be learned:
+ | d c b a 0 1 2 3 4 + | d c b a 0 1 2 3 4
--+-------------------------- --+--------------------------
d |a1 a2 a3 a4 d c b a 0 d |
c |a2 a3 a4 d c b a 0 1 c |
b |a3 a4 d c b a 0 1 2 b |
a |a4 d c b a 0 1 2 3 a |
0 | d c b a 0 1 2 3 4 0 |
1 | c b a 0 1 2 3 4 1d 1 | 2
2 | b a 0 1 2 3 4 1d 1c 2 | 1 3 4
3 | a 0 1 2 3 4 1d 1c 1b 3 | 1 2 4 1d 1c
4 | 0 1 2 3 4 1d 1c 1b 1a 4 | 1 2 3 1d 1c 1b 1a
Note the missing diagonal of zeros (a number plus its inverse is zero), and the missing upper left half (the sum of two numbers is the inverse of the sum of the inverse numbers).
For instance, to calculate b + d, you can easily derive the result as b + d = inv(2 + 4) = inv(1c) = a3.
Multiplication
In decimal, you have to perform quite a bit of tough learning:
* | 0 1 2 3 4 5 6 7 8 9 * | 0 1 2 3 4 5 6 7 8 9
--+----------------------------- --+-----------------------------
0 | 0 0 0 0 0 0 0 0 0 0 0 |
1 | 0 1 2 3 4 5 6 7 8 9 1 |
2 | 0 2 4 6 8 10 12 14 16 18 2 | 4
3 | 0 3 6 9 12 15 18 21 24 27 3 | 6 9
4 | 0 4 8 12 16 20 24 28 32 36 4 | 8 12 16
5 | 0 5 10 15 20 25 30 35 40 45 5 | 10 15 20 25
6 | 0 6 12 18 24 30 36 42 48 54 6 | 12 18 24 30 36
7 | 0 7 14 21 28 35 42 49 56 63 7 | 14 21 28 35 42 49
8 | 0 8 16 24 32 40 48 56 64 72 8 | 16 24 32 40 48 56 64
9 | 0 9 18 27 36 45 54 63 72 81 9 | 18 27 36 45 54 63 72 81
But in balanced nonal, the table on the right is reduced heavily: The three quadrants on the lower left, the upper right and the upper left all follow from the lower right one via symmetry.
* | d c b a 0 1 2 3 4 * | d c b a 0 1 2 3 4
--+-------------------------- --+--------------------------
d |2b 13 1a 4 0 d a1 ac b2 d |
c |13 10 1c 3 0 c a3 a0 ac c |
b |1a 1c 4 2 0 b d a3 a1 b |
a | 4 3 2 1 0 a b c d a |
0 | 0 0 0 0 0 0 0 0 0 0 |
1 | d c b a 0 1 2 3 4 1 |
2 |a1 a3 d b 0 2 4 1c 1a 2 | 4
3 |ac a0 a3 c 0 3 1c 10 13 3 | 1c 10
4 |b2 ac a1 d 0 4 1a 13 2b 4 | 1a 13 2b
For instance, to calculate c*d, you can just do c*d = 3*4 = 13. Or for 2*b, you derive 2*b = inv(2*2) = inv(4) = d. It's really a piece of cake, once you are used to it.
Taking this all together, you need to learn
for decimal:
0 inversions
45 summations
36 multiplications
Total: 81
for balanced nonal:
9 inversions
16 summations
6 multiplications
Total: 31
I can't quite figure how you get 81 data points for our decimal system. Care to elaborate?
– Wildcard
2 days ago
@Wildcard It's 45 data points for addition (1+1, 1+2, 1+3, ..., 2+2, 2+3, ..., 9+9) and 36 data points for multiplication (2*2, 2*3, 2*4, ..., 3*3, 3*4, ..., 9*9). I have removed the trivial additions with zero, the trivial multiplications with zero and one, and the half of the table that follows from commutativity.
– cmaster
2 days ago
@Wildcard For balanced nonal, it's similar, except that you need to add 9 data points for inversion. The summation table is reduced by the trivial additions with zero, the trivial additions that yield zero (x+(-x) = 0), by commutativity, and the sign symmetry (-x+(-y) = -(x+y)), so only 16 data points remain. For multiplication, since we already know inversion, multiplication with -1, 0, and 1 is trivial, multiplications with a negative factor follow from symmetry, so we are only left with the table for the digits 2, 3, and 4. Which yields the six data points I've shown in my answer.
– cmaster
2 days ago
@Wildcard I have now updated my answer with a more detailed comparison. Hope you like it.
– cmaster
2 days ago
9 votes
Base-4
This might be a natural choice for a society that perfected digital communication before digital computation.
Digital signals are often transmitted (i.e., “passed through the analogue world”) using quadrature phase-shift keying, a special form of quadrature amplitude modulation. This is generally more performant and reliable than simple amplitude modulation, and more efficient than frequency modulation.
QPSK / QAM by default use four different states, or a multiple of four, as the fundamental unit of information. We usually interpret this as "it always transmits two bits at a time", but if this method had become standard before binary computers, we'd probably be used to measuring information in quats (?) rather than bits.
Ultimately, the computers would at the lowest level probably end up looking a lot like our binary ones, but usually with two bits paired together into a fundamental "4-logical unit". Unlike binary-coded decimal, this doesn't incur any overhead of unused binary states.
And it could actually make sense to QPSK-encode even the local communication between processor and memory etc. (wireless transmission everywhere!), thus making the components "base-4 for all that can be seen".
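A minimal sketch (illustrative Python, nothing like a real modem, with made-up helper names) of the point that QPSK naturally carries one base-4 digit per transmitted symbol:

    import cmath

    # Gray-coded QPSK: each base-4 digit ("quat") selects one of four carrier
    # phases. Real modems add pulse shaping, synchronisation, error coding, etc.
    PHASE_DEG = {0: 45, 1: 135, 2: 225, 3: 315}

    def to_quats(n):
        """Write a non-negative integer as base-4 digits, most significant first."""
        digits = []
        while n:
            digits.append(n % 4)
            n //= 4
        return list(reversed(digits)) or [0]

    def modulate(quats):
        """One unit-amplitude constellation point per quat."""
        return [cmath.rect(1.0, cmath.pi * PHASE_DEG[q] / 180) for q in quats]

    quats = to_quats(2018)
    print(quats, "->", len(modulate(quats)), "QPSK symbols")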
What might be considered a related interesting factoid is that DNA genetic code is also base-4. But, I wouldn't say this should have any relevance upon the development of computers.
– leftaroundabout
Nov 27 at 13:59
Re your comment: biological computers, perhaps?
– Wildcard
2 days ago
QAM waveforms are very different from binary PCM, though, and QAM can encode many numbers of states, not just 4
– endolith
yesterday
@endolith yeah, but QAM usually encodes some 2²ⁿ states, i.e. a multiple of four, doesn't it? And anyways before computers, they'd plausibly not go over four states, i.e. QPSK – anything more is basically just exploiting excessive SNR to get higher data rate, but prior to computers they'd probably just engineer the SNR to be just enough for QPSK, which already gives the essential advantage.
– leftaroundabout
yesterday
6 votes
It's almost completely irrelevant.
The binary nature of computers is very very very rarely relevant in practice. Just about the only practical situation where the binary nature of computers is relevant is when doing sophisticated error bounds analysis of floating point calculations.
In actual reality, there are quite a few aspects of modern computing which do not even rely on binary representations. For example, we all like SSDs, don't we? Well, modern cheap consumer SSDs are not binary devices -- they use multi-level cells as their fundamental building blocks. For another example, we all like Gigabit Ethernet, don't we? Well, the unit of transmission in Gigabit Ethernet is an indivisible 8-bit octet (transmitted as a 10-bit symbol, but hey, who counts).
No modern computer (all right, hardly any modern computer) can access one bit of storage individually. Most usually, the smallest accessible unit is an octet of eight bits, which can be seen as an indivisible atom with 256 possible values. (And even this is not really true; what exactly is the atomic unit of memory access varies from architecture to architecture. Access to one individual bit is not atomic on any computer I know of.)
Donald Knuth's Art of Computer Programming, which is the closest thing we have to a fundamental text in informatics, famously uses the fictional MIX computer for practical examples -- and one of the charming characteristics of MIX is that one does not know whether it's a binary or a decimal computer.
What actually matters is that modern computers are digital -- in the computer, everything is a number. That the numbers in question are represented by tuples of octets is a detail which very rarely has any practical or even theoretical importance.
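To illustrate the point that the hardware base rarely leaks upward, here is a small sketch (illustrative only) of the trick mentioned in the comments below: arbitrary-precision arithmetic that stores its digits in base 10,000 regardless of the binary hardware underneath.

    BASE = 10_000   # each "limb" holds one base-10,000 digit: 0..9999

    def to_limbs(n):
        """Split a non-negative integer into base-10,000 digits, least significant first."""
        limbs = []
        while n:
            limbs.append(n % BASE)
            n //= BASE
        return limbs or [0]

    def add(a, b):
        """Schoolbook addition on limb lists, carrying in base 10,000."""
        out, carry = [], 0
        for i in range(max(len(a), len(b))):
            s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
            out.append(s % BASE)
            carry = s // BASE
        if carry:
            out.append(carry)
        return out

    print(to_limbs(123456789))               # [6789, 2345, 1]
    print(add(to_limbs(9999), to_limbs(1)))  # [0, 1], i.e. 10,000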
I don't see how this answers the question. In OP's world, the base is important enough that it needs a back story. At best, this should be a short comment on the question.
– pipe
Nov 27 at 9:58
@pipe: The point is that the physical representation is irrelevant, and it cannot be made relevant. (And, unlike other fields where I provide answers or comments on this site, this is actually my profession.) FYI, base 2 is relevant only when referring to the physical representation; any computer can store and manipulate numbers in any base you wish. For example, base 10,000 is moderately popular for arbitrary-precision computations.
– AlexP
Nov 27 at 11:12
Yes, I'm well aware that the base is not "important", but apparently OP thinks that the physical representation is interesting, interesting enough to want to build a world around it. Obviously such a computer could work in any base as well. Also, a lot of things in our world today are designed the way they are because computers are predominantly base 2, for example ANSI art (256 codepoints, 16 colors), 16-bit CD audio, JPEG quantization blocks being 8x8 pixels affecting the quality of all images online, etc.
– pipe
Nov 27 at 12:16
@pipe: On the other hand, the Web-safe color palette has 6³ = 216 colors, the CD sampling rate is 44,100 samples/second, bandwidth is measured in powers of ten bits per second, Unicode has room for 1,114,112 = 17×2¹⁶ code points...
– AlexP
Nov 27 at 12:23
5 votes
Get rid of George Boole, inventor of Boolean Algebra, probably the main mathematical foundation of computer logic.
Without Boolean Algebra, regular algebra would give quite an edge to decimal computers, even if you needed three to four times as much hardware per digit.
There's no need to kill him, just have something happen that stops his research or get him interested in another field instead.
Comments are not for extended discussion; this conversation has been moved to chat.
– L.Dutch♦
2 days ago
5 votes
EDIT - On reading the answer by L.Dutch, I see that there is an energy-saving argument for using trinary. I'd be interested to find out how theoretically true that is. Crucially, the OP talks about transistors rather than thermionic valves, and that could make a difference. There are also energy questions to address beyond the simple switching of a transistor. It would be good to know the extent of this saving and any extra cost associated with building and maintaining the hardware. Heat dissipation may also be an issue.
I remain open-minded as well as interested in this approach.
I don't think there is a historical justification for your premise as far as transistors are concerned, so instead I will just say:
The minimum historical change is No Electronics
It's possible to use other bases, but it's just a really bad idea.
IBM 1620 Model I, Level H: IBM 1620 data processing machine with IBM 1627 plotter, on display at the 1962 Seattle World's Fair.
The IBM 1620 was announced by IBM on October 21, 1959,[1] and marketed as an inexpensive "scientific computer".[2] After a total production of about two thousand machines, it was withdrawn on November 19, 1970. Modified versions of the 1620 were used as the CPU of the IBM 1710 and IBM 1720 Industrial Process Control Systems (making it the first digital computer considered reliable enough for real-time process control of factory equipment)[citation needed].
Being variable word length decimal, as opposed to fixed-word-length pure binary, made it an especially attractive first computer to learn on – and hundreds of thousands of students had their first experiences with a computer on the IBM 1620.
https://en.wikipedia.org/wiki/IBM_1620
The key phrase there is variable word length decimal, which is a real faff and actually still uses binary at the electronic level.
Reasoning
Any other electronic system than binary will soon evolve into binary because it depends on digital electronics.
It is commonly supposed, by those not in the know, that zero voltage represents a binary zero and some arbitrary voltage, e.g. 5 volts, represents a 1. However, in the real world these voltages are never so precise. It is much easier to have two ranges with a specified changeover point.
Having to maintain, say, ten different voltages for ten different digits would be incredibly expensive, unreliable, and not worth the effort.
So your minimum historical change is No Electronics.
This answer does not meet the question's "Transistors were invented in this alternate timeline, in the 1950s" constraint.
– RonJohn
Nov 26 at 14:29
@RonJohn - I understand your point. I suppose I could have answered "Given the conditions you propose, there is no historical answer to your question." Maybe I'll change the wording to add that extra sentence.
– chasly from UK
Nov 26 at 14:38
The electronics of the IBM1620 were entirely binary. The decimal capabilities depended on the way real-world data was encoded in binary, not on hardware that somehow used 10 different states to represent decimal numbers.
– alephzero
2 days ago
@alephzero - If you read my answer carefully you'll see I said, "The key phrase there is variable word length decimal which is a real faff and actually still uses binary at the electronic level."
– chasly from UK
2 days ago
4 votes
As I understand it, early tribes used base 12, and it's a lot more flexible than 10. They had a way to count to 12 by counting knuckles, getting up to 60 on two hands pretty easily, which is the basis of our degrees.
10-finger-counters supposedly defeated the base 12ers but kept their time system and degree-based trigonometry.
If the base 12ers had won, a three-state computer might have made a LOT more sense (binary might have actually looked silly). In this case a byte would probably be 8 tri-state bits (let's call it 8/3), which would fit 2 base-12 digits, instead of our 8/2 layout, which always had a bit of a mismatch.
We tried to cope with our mismatch by using BCD and throwing away 6 states from each nibble (1/2 byte) for a closer approximation of base 10, which gave us a "pure" math without all these weird binary oddities (like how, in base 10, 1 byte holds 256 states, 2 bytes hold 65536, etc.).
With 8/3, base 12ers would have no mismatch; it would be really clean. Round tri-state numbers would often look like nice base-12 numbers: 1 byte would hold 100 states, and 2 bytes would hold 10000, etc.
So can you change the numeric base of your book? Shouldn't come up too often :) It would be fun to even number pages in base 12... complete immersion.
Base 6 has almost all of the advantages of Base 12, but requires much smaller addition and multiplication tables. And it can be built by pairing binary elements with ternary elements. Also, 6^9 ~ 10^7 (10,077,696).
– Jasper
2 days ago
3 votes
Decimal computers.
Modern computers are, indeed, binary. Binary is the classification of an electrical signal as occupying one of two states, conditional on the voltage. For the sake of simplicity, you could say that in a 5V system, any signal above 4V is a '1' and everything else is a '0'. Once a signal has been confined to two states, it's pretty easy to apply Boolean math, which was already well-explored ahead of computers. Binary was an easy choice for computers because so much work was already done in the area of Boolean algebra.
When we needed to increase the range of numbers, we added more signals. Two signals (two bits) could represent 4 distinct values, 3 bits 8 values, and so on. But what if, instead of adding more signals to expand our values, we simply divided the existing signals up more? In a 5V system, one signal could represent a number from 0-9 if we divide up the voltage: 0-0.5 volts = 0, 0.5-1.0 volts = 1, 1.0-1.5 volts = 2, etc. In theory, each signal could then distinguish five times as many states as a binary signal. But why stop there? Why not split each signal into 100 distinct values?
Well, for the same reason we never went further than binary: environmental interference and lack of precision components. You need to be able to precisely measure the voltages to determine the value, and if those voltages change, your system becomes unreliable. All sorts of factors can affect electrical voltages: RF, temperature, humidity, metal density, etc. As components age, their tolerances tend to decrease.
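A rough, purely illustrative sketch of that fragility (invented numbers, not a circuit model): add the same Gaussian noise to evenly spaced voltage levels on a 0-5 V swing and count how often a symbol is mis-read.

    import random

    def error_rate(levels, noise_sigma=0.4, trials=20_000, vmax=5.0):
        """Fraction of symbols mis-read when `levels` evenly spaced voltages
        share a 0..vmax swing and every sample picks up Gaussian noise."""
        step = vmax / (levels - 1)
        errors = 0
        for _ in range(trials):
            symbol = random.randrange(levels)
            measured = symbol * step + random.gauss(0, noise_sigma)
            decoded = min(levels - 1, max(0, round(measured / step)))
            errors += decoded != symbol
        return errors / trials

    random.seed(1)
    for levels in (2, 3, 10):
        print(levels, "levels:", error_rate(levels))
    # Two levels are essentially never mis-read; ten levels on the same
    # swing are mis-read a large fraction of the time with the same noise.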
Any number of things could have changed this. If you use a different medium, light for example, interference isn't a concern. This is exactly why fiber-optics can carry so much more data than electrical connections.
The discovery of a room-temperature superconductor could also have allowed different computers to become standard. A superconductor doesn't lose energy to heat. This means you could pump more current through a system without fear of overheating, requiring less precise components and less (or no) cooling.
So, in short, binary computers dominate because of physical limitations related to electricity and the wealth of knowledge (Boolean algebra) that was already available when vacuum tubes, transistors, and semiconductors came about. Change any of those factors, and binary computers may never have been.
3 votes
In the late 1950s analog computers were developed using a hydraulic technology called fluidics. Fluidic processing is still used in automatic transmissions, although newer designs are hybrid electronic/fluidic systems.
Can you explain more about this? Describe fluidic processing, link to more info, and then explain why it might have surpassed binary digital computing?
– kingledion
Nov 27 at 15:20
2 votes
Hypercomputation
According to Wikipedia Hypercomputation is defined to be the following:
Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.
The Church–Turing thesis states that any "effectively computable" function that can be computed by a mathematician with a pen and paper using a finite set of simple algorithms, can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not effectively computable in the Church–Turing sense.
Technically the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of useful, rather than random, uncomputable functions.
What this means is that hypercomputation can do things ordinary computers cannot do, not in terms of scope limitations such as the ability to access things on a network, but rather in terms of what can and cannot be fundamentally solved as a mathematical problem.
Consider this. Can a computer store the square root of 2 and operate on it? Well, maybe, because it could store the coefficients of the polynomial whose solution is that square root and then index the solutions to that polynomial. Alright, so we can then represent so-called algebraic numbers (at least I believe so). What about all real numbers? Euler's constant and pi are likely candidates for being unrepresentable in any meaningful sense using binary. We can approximate, but we cannot have perfect representations. We could make pi a special symbol, as well as e, and just increase the symbolic set. Still not good enough. That's the primary thing that comes to mind, to me at least: the ability to digitally compute any real number with perfect precision.
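A tiny sketch (illustrative only, not a real computer-algebra system) of the contrast being drawn: a float merely approximates the square root of 2, while the "algebraic number" idea stores it exactly as a polynomial plus a root index.

    from fractions import Fraction

    # Floating point only approximates sqrt(2): squaring it misses 2 slightly.
    approx = 2 ** 0.5
    print(approx * approx == 2)      # False

    # Exact representation sketched above: keep the defining polynomial
    # x**2 - 2 (as exact rational coefficients) and say which root we mean.
    sqrt2 = {"poly": [Fraction(-2), Fraction(0), Fraction(1)], "root_index": 1}

    # Any computation on sqrt2 now manipulates the polynomial exactly;
    # no decimal digits of the root are ever stored or rounded.
    print(sqrt2)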
This would be a reason for such a society to never discover binary computers being useful. At some point we switched from analog to binary because of electrical needs and signal considerations; I honestly do not know the details. We modeled the modern notion of a processor and other things loosely off the notion of a Turing Machine, which was ultimately the formal way of discussing computability, itself a multi-faceted convergence of sorts. There was the idea of something being human-computable and then theoretically computable. The rough abstract definition used for many years ended up converging with the notion of the Turing Machine. There was also a set-theoretic concept (I don't recall the name) that ended up converging to the exact same notion of "computable". All of these converging basically meant it was said and done: that is what we as a society (or even as the human race) were able to come up with as a notion of what is and is not programmable. However, that is the convergence of possibly over 3000 years of mathematical development, possibly beginning as far back as Euclid, when he formalized the most basic concepts of theorems and axioms. Sure, math existed before, but it was just a tool; nobody had a formal notion of it. Things were just obvious and known. If hypercomputation is possible for humans to do (rather than being limited to machines), then all it would take is one genius in the entire history of math to crack it. I'd say that is a reasonable thing for an alternate history.
1 vote
Base-10 computing machines were used commercially to control the early telephone switching system. The telephone companies used them because they were solving a base-10 problem. As long as transistors remain larger and more expensive than mechanical relays, then there's no reason for telephone switchboards to switch to binary.
But that's cheating the spirit of the question. Suppose cheap transistors are invented. Then how can a civilization get out of binary computing? Binary logic is the best way to build an electronic deterministic computer with cheap transistors.
Answer: Analog neural networks outperform manually-programmed computers.
Humans are bad at programming computers directly. Manually-programmed computers can perform only simple unambiguous tasks. Statistical programming, also called "machine learning", can answer questions without clear mathematical answers. Machine learning can answer questions like "is this a picture of a frog". Hand-coding an algorithm to determine "is this a picture of a frog" is well beyond the capabilities of human beings. So are more complex tasks like "enforce security at this railroad station" and "take care of my grandmother in her old age".
Manually-programmed software outnumbers neural-network-based software right now, but that might plausibly be just a phase. Manually-programmed software is easier to create. In a few hundred years, neural-network-based software might outnumber manually-programmed software.
One of the most promising avenues of machine learning involves neural networks, which use ideas copied from biological brains. If we invent a good general-purpose AI then it might take the form of a neural network, especially if the AI is based off of the human brain.
If you're designing a computer to execute traditional programs then binary is the best way to go. But if the goal of a microchip is to simulate a human brain then it may be inefficient to build a binary computer and then simulate a human brain on it. It might make more sense to build a neural network into hardware directly. The human brain is an analog device, so a microchip based off of the human brain may be an analog device too.
If someone figured out how to build a powerful general-purpose AI as an analog neural network then chips optimized for neural networks may largely replace binary computers.
1 vote
One simple change would be to make solid-state electronics impossible. Either your planet doesn't have abundant silicon, or there is some chemical issue which makes it uneconomic to manufacture semiconductors.
Instead, consider what would happen if Charles Babbage's mechanical computer designs (which were intrinsically decimal devices, just like the mechanical calculators which already existed in Babbage's day) were scaled down to nano-engineering size.
The earliest computers used vacuum tube electronics, not semiconductors. The basic design of vacuum tube memory circuits was already known by 1920, long before the first computers, but for large-scale computer memory, tubes would have been prohibitively large, power-hungry, and unreliable. The earliest computers used various alternative systems - some of which were in effect mechanical, not electrical. So the notion of totally mechanical computers does have some relation to actual history.
1 vote
Morse code rules.
https://www.kaspersky.com/blog/telegraph-grandpa-of-internet/9034/
Just as modern keyboards retain the QWERTY of the first typewriters, in your world the trinary code of Morse becomes the language of computers. Computers developed to rapidly send and receive messages naturally use this language to send messages other than written language, and then to communicate between parts of themselves.
There are apparently technical reasons making binary more efficient. https://www.reddit.com/r/askscience/comments/hmy7w/if_morse_is_more_efficient_than_binary_why_dont/
I am fairly certain that there would be more efficient setups than QWERTY as well, but now, many decades since there was a need for keys to be spatially distant, there is still QWERTY. So too Morse in your world: it was always the language of computers, and it endures as such.
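A toy sketch (illustrative only, with a deliberately tiny code table) of treating Morse as a three-symbol stream of dot, dash, and gap, as this answer suggests; note the comment below pointing out that Morse timing can also be read as binary on/off keying.

    # Toy Morse encoder: three symbols on the wire -- ".", "-", and the
    # letter gap " ". Real Morse distinguishes more gap lengths; sketch only.
    MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}

    def encode(text):
        return " ".join(MORSE[ch] for ch in text.upper() if ch in MORSE)

    stream = encode("SOS")
    print(stream)                # ... --- ...
    print(sorted(set(stream)))   # [' ', '-', '.']  -> a three-symbol alphabet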
Interesting. You could merge this with one of the reasons for ternary mathematics to provide a more robust explanation for ternary computers.
– kingledion
2 days ago
But Morse code is binary
– endolith
40 mins ago
1 vote
Oh, man. Although non-binary computers would be extremely inconvenient, I can easily imagine trinary, decimal or even analog computers becoming dominant, due to a terrible force all developers fear: entrenchment, and the need for legacy support. Lots of times in computing history, we've struggled with decisions made long ago which we just couldn't break free of for a long time, from sheer inertia. There's a lot of stuff even in modern processors which we never would have chosen, if it wasn't for the need to support existing software and architecture decisions.
So for your scenario, I imagine that for some reason one type of non-binary computer got a head start. Maybe for many years, computers didn't improve all that much due to some calamity. But software was still written for these weak computers, extremely useful and good software, tons of it. By the time things got going again, it was just much more profitable to focus on making trinary (or whatever) better, rather than trying to redo all the work Ninetel put into their 27-trit processor.
Sure, there are some weirdos claiming that binary is so much more sensible that it's worth it to make a processor that's BISC (binary instruction set circuit) at the bottom with a trinary emulation layer on top. But after the bankruptcy of Transbita, venture capital has mostly lost interest in these projects.
1 vote
They made quantum computing work much more quickly than we have
Why have binary states when you can have infinitely many?
They probably had binary computers for a short time, then cracked quantum.
"What is the minimal historical change that would make non-binary computers the standard in a world equivalent to our modern world?"
Someone cracked a cheap, room-temperature way to make qubits.
(ref: https://medium.com/@jackkrupansky/the-greatest-challenges-for-quantum-computing-are-hardware-and-algorithms-c61061fa1210)
up vote
1
down vote
The creatures involved have three fingers and use a ternary numeral system in everyday life. The technical advantages of binary over ternary aren't as great as the advantages (radix economy, etc.) of binary over decimal, so they never bothered to adopt a system other than the one they knew innately.
up vote
0
down vote
Binary computers are simply the most efficient ones, claims to the contrary notwithstanding (such claims are based on pretty exotic assumptions; the linked claim that "the future lies in analog computers" is even hilariously wrong, though I can see where Ulmann comes from).
Binary computers are the most cost- and space-efficient option if the technology is based on transistors: a ternary computer would require more transistors than a binary one to store or process the same amount of information. The reason is that electrically, the distinction between "zero volts" and "five volts" is really more like "anything below 2.0 volts" versus "anything above 3.0 volts", which is much easier to control than ternary voltage levels such as "below 1.0 volts, between 2.0 and 3.0 volts, or between 4.0 and 5.0 volts". Yes, you need the gaps between the voltage bands, because you need to deal with imprecision due to noise (manufacturing spread and electrical imprecision); and yes, the gaps are pretty large, because the larger the gap, the more variance in the integrated circuits is inconsequential and the better your yield (which is THE most important parameter of submicron manufacturing).
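A minimal sketch of that voltage-band argument, with made-up thresholds rather than values from any real logic family:

    # Illustrative only: guard bands for binary vs ternary on a 0-5 V supply.
    def read_binary(v):
        """Two wide bands separated by a single 1 V guard gap."""
        if v < 2.0:
            return 0
        if v > 3.0:
            return 1
        return None  # forbidden region: noise pushed the signal into the gap

    def read_ternary(v):
        """Three bands need two guard gaps, so each valid band is narrower."""
        if v < 1.0:
            return 0
        if 2.0 <= v <= 3.0:
            return 1
        if v > 4.0:
            return 2
        return None

    # The same 0.8 V of noise is harmless on a binary low level,
    # but can knock a ternary mid level into a gap:
    print(read_binary(0.0 + 0.8))   # 0    -> still a clean logic low
    print(read_ternary(2.5 + 0.8))  # None -> ambiguous read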
How to get around this?
Either change the driving parameters. In an economy where efficiency isn't even remotely relevant, you can choose convenience. Such an economy will instantly collapse as soon as it touches a more efficient one, so this requires an isolated economy (Soviet-style or even North-Korea-style). It takes some extra creativity to design a world where a massively less efficient economy isn't abandoned by people voting with their feet - historically this was enforced by oppressive regimes, but it might be possible that the people stay at a lower level of income and goods for other reasons.
Or claim basic components that are better at being trinary than transistors. Somebody with a better background in microelectronics than me might be able to propose something that sounds credible, or maybe something that isn't based on classic electrical currents: quantum devices, maybe, or something photonic.
Why is this not done much in literature?
Because, ultimately, it does not matter much whether you have bits or trits. Either way, you bunch together as many of them as you need to represent N decimal digits. Software engineers don't care much, unless they are the ones who write the basic algorithms for addition/subtraction/etc., or the ones who write the algorithms that need to be fast (i.e. those that deal with large amounts of data, whether it's a huge list of addresses or the pixels on the screen).
Some accidental numbers would likely change. Bunching 8 bits into a byte is helpful because 8 is a power of 2; that's why 256 (2^8) tends to pop up in the number of screen colors and on various other occasions. With trinary computers, you'd likely use trytes of nine trits, giving 19683 values. HDR would arrive much later or not happen at all, because RGB would already have more color nuances, so there would be some non-obvious differences.
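A quick back-of-the-envelope check of those numbers, plus the radix-economy figure mentioned in comments elsewhere on this page (the choice of n below is arbitrary):

    import math

    print(2 ** 8)   # 256   values per 8-bit byte
    print(3 ** 9)   # 19683 values per 9-trit "tryte"

    def radix_economy(base, n=10**6):
        """Cost of representing n: number of digits needed times the radix."""
        return base * math.ceil(math.log(n, base))

    for b in (2, 3, 10):
        print(b, radix_economy(b))
    # For this n, base 3 comes out cheapest of the integer bases tested,
    # which is why e (~2.718) is often cited as the theoretical optimum.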
You can simply make it a background fact, never highlight it, just to avoid the explanation.
Which begs the counter-question: What's the plot device you need trinary for?
1
Your answer boils down to: "Your world isn't interesting", which isn't very helpful
– pipe
Nov 27 at 10:03
1
"Bunching 8 bits into a byte is helpful because 8 is a power of 2" - which doesn't explain why many computers (even up to the 1980s) did not have 8 bits in a byte. Word lengths of 12, 14 and 18 bits were used, and later bigger numbers including 48 and 60 bits (divided into ten 6-bit "characters").
– alephzero
2 days ago
@alephzero The drive towards 8-bit bytes isn't a very strong one, admittedly. But eventually it did converge towards 8-bit bytes. Maybe the actual drive was that it was barely enough to hold an ASCII character, and that drive played out in times when you wouldn't want to "waste" an extra byte, and the idea of supporting multiple character sets was a non-issue because the Internet didn't exist yet. Still, I'm pretty sure some bit fiddling critically depends on the bit count being a power of two... though I'd have trouble finding such an algorithm, admittedly.
– toolforger
2 days ago
up vote
0
down vote
Cryptocurrency scammers having convinced sufficiently many big corporations and governments to become partners in their pyramid scheme that economies of scale make their inefficient and ridiculous ternary-logic hardware cheaper than properly-designed computers.
up vote
-1
down vote
The strength of binary is that it's fundamentally a yes/no logic system; the weakness of binary is that it is fundamentally a yes/no logic system - you need multiple layers of logic to create "yes, and" statements with binary logic. The smallest change you would need to make to move away from binary (in terms of having the rest of the world the same but computing different) would be to have the people who pioneered the science of computing, particularly Turing (thanks @Renan), aim for, and demand, more complex arrays of basic logic outcomes (a, b, c, etc., various combinations, all of the above, none of the above). Complex outcome options require more complex inputs, more complex logic gates and a more complex programming language: consequently computers will be more expensive, more delicate, and harder to program.
A few people might mess around with binary for really basic machines, like pocket calculators, but true computers will be more complex machines.
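For a concrete picture of what "more complex arrays of basic logic outcomes" could mean, here is a sketch of one well-known candidate, Kleene's strong three-valued logic; this is only one possible reading of the answer's idea, not the only one.

    # Hypothetical sketch: three logic outcomes instead of two -
    # False, True, and Unknown ("U"), following Kleene's strong 3-valued logic.
    U = "U"

    def and3(a, b):
        if a is False or b is False:
            return False
        if a is True and b is True:
            return True
        return U  # anything involving Unknown that isn't forced False

    def or3(a, b):
        if a is True or b is True:
            return True
        if a is False and b is False:
            return False
        return U

    print(and3(True, U))  # U    -- a binary gate has no way to say "maybe"
    print(or3(True, U))   # True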
1
You are looking for Alan Turing. He is the one who introduced binarism into computing, when describing the Turing machine.
– Renan
Nov 26 at 14:34
@Renan Thanks that's what I thought but I couldn't remember if he got that from someone earlier or not.
– Ash
Nov 26 at 14:36
Playing around with complex arrays still needs each element of the array to have a defined state. If transistors are used then the state would almost certainly still be represented in binary at some level. The equivalence of different types of universal computer has been proven.
– chasly from UK
Nov 26 at 14:44
@chaslyfromUK Yes, if transistors are used then binary is mechanically inherent, but if transistor logic is insufficient to satisfy the fundamental philosophical and mechanical goals of the people building the system, transistors can't be used and different circuitry will be required.
– Ash
Nov 26 at 14:53
@Renan: You have a strange misconception about Turing machines. A Turing machine is defined by the alphabet (set of symbols) on the tape, the set of internal states, and the rules for state transition. It has nothing to do with numbers, binary or otherwise.
– AlexP
Nov 27 at 11:06
23 Answers
up vote
42
down vote
Non binary computers, in particular ternary computers, have been built in the past (emphasis mine).
One early calculating machine, built by Thomas Fowler entirely from wood in 1840, operated in balanced ternary. The first modern, electronic ternary computer Setun was built in 1958 in the Soviet Union at the Moscow State University by Nikolay Brusentsov, and it had notable advantages over the binary computers which eventually replaced it, such as lower electricity consumption and lower production cost.
If you want to make ternary computers the standard, I think you should leverage those advantages: make energy more expensive, so that saving energy is a big advantage, and make production more expensive.
Note that, since smelting silicon is an energy intensive activity, already increasing the cost of energy will indirectly affect the production costs.
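For readers unfamiliar with the balanced ternary mentioned above, here is a minimal sketch of how integers map to the digits -1, 0, +1 (illustrative code, not how Fowler's machine or Setun actually implemented it).

    # Minimal sketch of balanced ternary: every integer as digits -1, 0, +1.
    def to_balanced_ternary(n):
        """Return the balanced-ternary digits of n, least significant first."""
        digits = []
        while n != 0:
            r = n % 3
            if r == 2:          # represent 2 as (3 - 1): write -1, carry 1
                r = -1
                n += 3
            digits.append(r)
            n //= 3
        return digits or [0]

    def from_balanced_ternary(digits):
        return sum(d * 3**i for i, d in enumerate(digits))

    print(to_balanced_ternary(8))             # [-1, 0, 1], i.e. 9 - 1
    print(from_balanced_ternary([-1, 0, 1]))  # 8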
26
L.Dutch - Although I answered differently I think the claim about trinary being energy saving is worth following up. Can you back this up with actual references and research? I'd be interested because I'm reluctant to accept it without being convinced. In particular I wonder if the cost of producing the trinary technology would offset the minor savings of using it.
– chasly from UK
Nov 26 at 14:00
7
It needed more memory when memory was expensive and limited. It demands more advanced components (3 states). It takes more time and knowledge to build them. And after binary had so much momentum behind it, it was just too wasteful. There is no point being better if you are too demanding and late.
– Artemijs Danilovs
Nov 26 at 15:42
19
From an information theoretic viewpoint, the most efficient base to compute in would be "e", but since that's not an integer, 3 would be the closest integer base.
– Tangurena
Nov 26 at 16:40
11
Also keep in mind that Setun was more efficient than binary computers largely because of its design - it came during a major transitional period where semiconductor diodes were just becoming available but transistors had not yet properly matured. They built Setun with diodes and magnetic cores (a system amenable to a three-state implementation) and this would be competing with vacuum tube based computers of the time. With transistor based electronics introduced this gap slammed shut - dramatically. Computers today are about a trillion times more efficient - that's a tough record to beat.
– J...
Nov 26 at 18:22
3
@Tangurena I honestly can't tell whether you're joking or just being mathematically deep. Nice comment either way...
– leftaroundabout
Nov 27 at 13:14
|
show 8 more comments
up vote
34
down vote
Instead of avoiding it, transcend binary:
Either let the evolution of technology take its course and somehow create a demand for non-binary processors. Analogous to what is happening now in the crypto currency scene: The developers of IOTA based their project on a ternary architecture model and are even working on a ternary processor (JINN).
Or let aggressive patenting and licensing in the early stages of binary processors (e.g. a general patent for binary processors due to lobbying or misjudgements in the patent office) be the cause for starting work on non-binary processors with less restrictive and more collaborative patents.
Patentability requirements are: novelty, usefulness, and non-obviousness1.
[the] nonobviousness principle asks whether the invention is an
adequate distance beyond or above the state of the art2
So this could be used to have a patent granted on binary processors. And even if it were an illegitimate patent that would later be revoked in lawsuits, this situation could give rise to non-binary processors.
23
You should focus on that second point and expand it more, that sounds interesting.
– kingledion
Nov 26 at 14:41
Free/open hardware doesn't get monetized very well.
– RonJohn
Nov 26 at 14:42
@RonJohn That's right. I'll update the answer. Maybe less restrictive patenting/licensing.
– mike
Nov 26 at 14:45
1
Advanced quantum computers could be a good choice for option one.
– Vaelus
Nov 26 at 15:54
2
@JohnDvorak The basis may be binary, but the superpositions are not. While we measure the results of quantum computation as binary numbers, the actual computations are not themselves binary.
– Vaelus
Nov 26 at 16:37
|
show 2 more comments
up vote
29
down vote
I would like to advance the idea of an analog computer.
Analog computers are something like the holy grail of electronics. They have the potential for nearly infinitely more computing power, limited only by the voltage or current measuring discriminator (i.e., the precision of measuring an electrical state or condition).
The reason we don't have them is because using transistors in their switching mode is simple. Simple, simple, simple. So simple, that defaulting everything to the lowest common denominator (binary, single-variable logic) was obvious.
But even today, change is coming.
Analog computing, which was the predominant form of high-performance computing well into the 1970s, has largely been forgotten since today's stored program digital computers took over. But the time is ripe to change this. (Source)
If analog and hybrid computers were so valuable half a century ago, why did they disappear, leaving almost no trace? The reasons had to do with the limitations of 1970s technology: Essentially, they were too hard to design, build, operate, and maintain. But analog computers and digital-analog hybrids built with today’s technology wouldn’t suffer the same shortcomings, which is why significant work is now going on in analog computing in the context of machine learning, machine intelligence, and biomimetic circuits.
...
They were complex, quirky machines, requiring specially trained personnel to understand and run them—a fact that played a role in their demise.
Another factor in their downfall was that by the 1960s digital computers were making large strides, thanks to their many advantages: straightforward programmability, algorithmic operation, ease of storage, high precision, and an ability to handle problems of any size, given enough time. (Source)
But, how to get there without getting hung up on the digital world?
A breakthrough in discrimination. Transistors, for all their value, are only as good as their manufacturing process. The more precisely constructed the transistor, the more precise the voltage measurement can be. The more precise the voltage measurement, the greater the programmatic value of a change in voltage - meaning faster computing and (best of all for most space applications) faster reaction to the environment.
Breakthrough in modeling equations. Digital computers are, by comparison, trivial to program (hence, BASIC). Their inefficiency is irrelevant compared to their ease of use. However, this is because double-integration is a whomping difficult thing to do on paper, much less to describe such that a machine can process it. But, what if we could have languages like Wolfram, R, or Haskell without having to go through the digital revolution of BASIC, PASCAL, FORTRAN, and C first? Our view of programming is very much based on how we perceive (or are influenced by) the nature of computation. Had someone come up with an efficient and flexible mathematical language before the discovery of switching transistors... the world would have changed forever.
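A rough way to put numbers on that "discrimination" point: the number of distinguishable levels on one analog signal is roughly the full-scale range divided by the noise floor, and its binary equivalent is the base-2 log of that. The figures below are assumptions for illustration only.

    import math

    # Illustrative arithmetic: how many levels one analog signal can carry.
    full_scale_v = 5.0
    noise_v = 0.005          # assume 5 mV of noise / measurement uncertainty

    levels = full_scale_v / noise_v
    equivalent_bits = math.log2(levels)
    print(levels, equivalent_bits)   # 1000 levels ~= just under 10 bits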
Would this entirely remove digital from the picture?
Heck, no. That's like saying the development of a practical Lamborghini (if the word practical can ever be applied to a Lamborghini) before, say, the Edsel would mean we would never have seen the Datsun B210. The single biggest weakness of analog computing is the human-to-machine interface. Computing in real time rather than through a series of discrete, often barely related steps is how our brains work — but that doesn't translate well to telling a machine how to do its job. The odds are good that a hybrid machine (a digital interface to an analog core) would be the final solution (as it may well be today). Is this germane to your question? Not particularly.
Conclusion
Two breakthroughs - one in transistor manufacture and the other in symbolic programming - are all that would be needed to advance analog computation, with all of its limitless computational power, over digital computing.
It's happening, although slowly: scientificamerican.com/article/…
– Jan Dorniak
Nov 26 at 20:39
8
If Neural Networks had been better developed before digital surpassed analog, perhaps the energy savings of analog neural networks would prevent binary's triumph. This change might have happened if only Marvin Minsky had discovered the potential of backpropagation in his book "Perceptrons", rather than focusing on neural network's limitations.
– AShelly
Nov 26 at 23:40
2
The source seems pretty biased. The largest analog computer setup I'm aware of was the Dutch "Deltar" simulation of the national flood barrier system. While it was used in the 70's, it was already outdated at the time. Its design dated back to the 40's, and it was built in the 60's. And very importantly, it was not general-purpose at all. It wasn't even domain-general; it simulated the Dutch water system and nothing else.
– MSalters
Nov 27 at 15:35
The Lamborghini 2.86 DT: lectura-specs.com/en/model/agricultural-machinery/… seems quite practical to me, but I do get your point.
– BentNielsen
yesterday
up vote
14
down vote
Having thought about this and looked at L.Dutch's answer, I may withdraw my original answer (or leave it just for interest).
Instead I will give a political answer.
As mentioned by L.Dutch, the Soviets came up with a ternary system (see below). Because of the limited use of the Russian language throughout the world the Soviets often resented the fact that US scientific papers got more credence - after all English is the Lingua Franca of science. (This is true by the way, not a fiction, I'll look for references).
Suppose the Russians had won a war over the West. It was common in Soviet Russia for science to be heavily politicised (again I'll look for references). Therefore, regardless of the validity of a non-binary system the Russians could have mandated ternary or some other base simply as a form of triumphalism.
Note - I'm chickening out of finding references at the moment. I've found some but they involve delving into Marxist doctrine or buying an expensive book. My personal knowledge of the situation came from talking to a British scientist who was digging through old Russian papers looking for bits that had been missed or had been distorted by doctrine. Maybe I'll delve further but not right now.
The first modern, electronic ternary computer Setun was built in 1958
in the Soviet Union at the Moscow State University by Nikolay
Brusentsov
https://en.wikipedia.org/wiki/Ternary_computer
This would hardly be a minimal change.
– mike
Nov 26 at 14:56
3
@mike - It's not a small change but that doesn't exclude it being a minimal one, unless you can think of a smaller political change, in which case go ahead.
– chasly from UK
Nov 26 at 14:58
3
I agree that it can be minimal in a political solution space. I hereby withdraw my comment :D
– mike
Nov 26 at 15:12
A more minimal change could be ternary computing becoming widespread in the Eastern bloc (and perhaps China but that would involve changing up the Sino-Soviet split causing ripple effects). Later on, the transition to a freer economy (either by reform or collapse of the USSR) could lead to ternary computers being widespread without something as drastic as WWIII.
– 0something0
11 hours ago
up vote
14
down vote
Having thought about this and looked at L.Dutch's answer, I may withdraw my original answer (or leave it just for interest).
Instead I will give a political answer.
As mentioned by L.Dutch, the Soviets came up with a ternary system (see below). Because of the limited use of the Russian language throughout the world the Soviets often resented the fact that US scientific papers got more credence - after all English is the Lingua Franca of science. (This is true by the way, not a fiction, I'll look for references).
Suppose the Russians had won a war over the West. It was common in Soviet Russia for science to be heavily politicised (again I'll look for references). Therefore, regardless of the validity of a non-binary system the Russians could have mandated ternary or some other base simply as a form of triumphalism.
Note - I'm chickening out of finding references at the moment. I've found some but they involve delving into Marxist doctrine or buying an expensive book. My personal knowledge of the situation came from talking to a British scientist who was digging through old Russian papers looking for bits that had been missed or had been distorted by doctrine. Maybe I'll delve further but not right now.
The first modern, electronic ternary computer Setun was built in 1958
in the Soviet Union at the Moscow State University by Nikolay
Brusentsov
https://en.wikipedia.org/wiki/Ternary_computer
This would hardly be a minimal change.
– mike
Nov 26 at 14:56
3
@mike - It's not a small change but that doesn't exclude it being a minimal one, unless you can think of a smaller political change, in which case go ahead.
– chasly from UK
Nov 26 at 14:58
3
I agree that in can be minimal in a political solution space. I hereby withdraw my comment :D
– mike
Nov 26 at 15:12
A more minimal change could be ternary computing becoming widespread in the Eastern bloc (and perhaps China but that would involve changing up the Sino-Soviet split causing ripple effects). Later on, the transition to a freer economy (either by reform or collapse of the USSR) could lead to ternary computers being widespread without something as drastic as WWIII.
– 0something0
11 hours ago
add a comment |
up vote
14
down vote
up vote
14
down vote
Having thought about this and looked at L.Dutch's answer, I may withdraw my original answer (or leave it just for interest).
Instead I will give a political answer.
As mentioned by L.Dutch, the Soviets came up with a ternary system (see below). Because of the limited use of the Russian language throughout the world the Soviets often resented the fact that US scientific papers got more credence - after all English is the Lingua Franca of science. (This is true by the way, not a fiction, I'll look for references).
Suppose the Russians had won a war over the West. It was common in Soviet Russia for science to be heavily politicised (again I'll look for references). Therefore, regardless of the validity of a non-binary system the Russians could have mandated ternary or some other base simply as a form of triumphalism.
Note - I'm chickening out of finding references at the moment. I've found some but they involve delving into Marxist doctrine or buying an expensive book. My personal knowledge of the situation came from talking to a British scientist who was digging through old Russian papers looking for bits that had been missed or had been distorted by doctrine. Maybe I'll delve further but not right now.
The first modern, electronic ternary computer Setun was built in 1958
in the Soviet Union at the Moscow State University by Nikolay
Brusentsov
https://en.wikipedia.org/wiki/Ternary_computer
Having thought about this and looked at L.Dutch's answer, I may withdraw my original answer (or leave it just for interest).
Instead I will give a political answer.
As mentioned by L.Dutch, the Soviets came up with a ternary system (see below). Because of the limited use of the Russian language throughout the world the Soviets often resented the fact that US scientific papers got more credence - after all English is the Lingua Franca of science. (This is true by the way, not a fiction, I'll look for references).
Suppose the Russians had won a war over the West. It was common in Soviet Russia for science to be heavily politicised (again I'll look for references). Therefore, regardless of the validity of a non-binary system the Russians could have mandated ternary or some other base simply as a form of triumphalism.
Note - I'm chickening out of finding references at the moment. I've found some but they involve delving into Marxist doctrine or buying an expensive book. My personal knowledge of the situation came from talking to a British scientist who was digging through old Russian papers looking for bits that had been missed or had been distorted by doctrine. Maybe I'll delve further but not right now.
The first modern, electronic ternary computer Setun was built in 1958
in the Soviet Union at the Moscow State University by Nikolay
Brusentsov
https://en.wikipedia.org/wiki/Ternary_computer
edited Nov 26 at 16:29
answered Nov 26 at 14:53
chasly from UK
8,55934086
This would hardly be a minimal change.
– mike
Nov 26 at 14:56
3
@mike - It's not a small change but that doesn't exclude it being a minimal one, unless you can think of a smaller political change, in which case go ahead.
– chasly from UK
Nov 26 at 14:58
3
I agree that it can be minimal in a political solution space. I hereby withdraw my comment :D
– mike
Nov 26 at 15:12
A more minimal change could be ternary computing becoming widespread in the Eastern bloc (and perhaps China, though that would involve changing the Sino-Soviet split and causing ripple effects). Later on, the transition to a freer economy (either by reform or collapse of the USSR) could lead to ternary computers being widespread without something as drastic as WWIII.
– 0something0
11 hours ago
add a comment |
up vote
11
down vote
A ternary system would be preferred in a world where data storage cost exceeds all other cost considerations in computers. This preference would be due to radix economy, which essentially quantifies the relative cost of storing numbers in a particular numbering system. Euler's number e ≈ 2.718 has the lowest radix economy. Among integers, 3 has the lowest radix economy, lower than 2 and 4 (which have the same).
If the first storage medium used for computing had stored ternary digits for less than, or only slightly more than, the cost of binary digits, and if processing cost had been insignificant compared to storage cost, ternary computing might have become the dominant standard. The advantage of ternary systems is small (around 5 percent), but could be important if storage cost was a serious consideration.
Binary computers dominate today mostly because electricity was the first effective medium to store and process numbers, and a single threshold voltage to distinguish between two states is easier to manage than two or more thresholds for three or more states.
Build your transistors in a medium that can store and process ternary digits efficiently, and emphasize the high cost of storage. A mechanical example would be a switch that can take three positions in a triangle.
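As a rough Python sketch of the comparison (the sample value N and the helper name are arbitrary): count the digit positions needed to write N in a base, weight them by the number of states each position must hold, and compare with the asymptotic cost b/ln(b).
import math

def radix_economy(base, n):
    # digits needed to write n in this base, weighted by states per digit position
    digits = math.floor(math.log(n, base)) + 1
    return base * digits

N = 10**6
for base in (2, 3, 4, 10):
    print(base, radix_economy(base, N), base / math.log(base))

# Asymptotic cost per unit of information is b/ln(b):
# 2 -> 2.885..., 3 -> 2.731..., 4 -> 2.885...; base 3 wins by roughly 5 percent.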
answered Nov 27 at 11:49
pommy
2112
New contributor
Very interesting!
– kingledion
Nov 27 at 15:11
add a comment |
up vote
10
down vote
Toolforger has one thing right: Binary computers are the most efficient computing devices possible. Period. Ternary has no technological advantage whatsoever.
However, I'm going to give a suggestion of how you can offset the disadvantage of ternary computing, to allow your society to actually use ternary computers instead of binary ones:
Your society has evolved to use a balanced numeral system.
Balanced numeral systems don't just use positive digits like we do, they use an equal number of negative and positive digits. As such, balanced ternary uses three digits for -1, 0, and 1 instead of the unbalanced 0, 1, and 2. This has several beneficial consequences:
Balanced numeral systems have symmetries that unbalanced systems lack. Not only can you exploit commutativity when doing calculations (you know what 2+3 is, so you know what 3+2 is), but also symmetries based on sign: -3-2 = -(3+2), -3*2 = 3*-2, -3*-2 = 3*2, and 3*-2 = -(3*2).
You have more computations with trivial outcome: x+(-x) = 0 and -1*x = -x.
The effect is that you have much less to learn when learning balanced numeral systems. For instance, unbalanced decimal requires you to learn 81 data points by heart to perform all four basic operations, whereas balanced nonal (9 digits from -4 to 4) requires only 31 data points, of which only 6 are for multiplication. The right-most column below uses -4 = d, -3 = c, -2 = b, and -1 = a as negative digits:
2*2 = 0*9 +4 = 4
2*3 = 1*9 -3 = 1c
2*4 = 1*9 -1 = 1a
3*3 = 1*9 +0 = 10
3*4 = 1*9 +3 = 13
4*4 = 2*9 -2 = 2b
The entire rest is either trivial or follows from symmetries. That's all the multiplication table your school kids need to learn!
Because you can get both positive and negative carries, you get far fewer and smaller carries in long additions. They simply tend to cancel each other out.
Because you have negative digits as well as positive ones, negative numbers are just an integral part of the system. In decimal, you have to decide which number is greater when doing a subtraction, then subtract the smaller number from the larger one, then reattach a sign to the result based on which of the two numbers was greater. In balanced systems you don't care which number is greater, you just do the subtraction. Then you look at the result and see whether it's positive or negative...
As a matter of fact, I once learned to use balanced nonal just for fun, and in general, it's indeed much easier to use than decimal.
My point is: To anyone who has been brought up calculating in a balanced numeral system, an unbalanced system would just feel so unimaginably awkward and cumbersome that they will basically think that ternary is the smallest base you can use. Because binary lacks the negative digits, how are you supposed to compute with that? What do you do when you subtract 5 from 2? You absolutely need a -1 for that!
As such, a society of people with a balanced numeral system background may conceivably settle on balanced ternary computers instead of binary ones. And once a chunk of nine balanced ternary digits has been generally accepted as the smallest unit of information exchange, no one will want to use 15 bits (what an awkward number!) to transmit the same amount of information in a binary fashion, with all the losses that would imply.
The result is basically a lock-in effect to balanced ternary that would keep people from using binary hardware.
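As a rough Python sketch of how naturally balanced ternary handles sign (the digit letters and the helper name are arbitrary), converting an integer needs no separate sign handling at all:
def to_balanced_ternary(n: int) -> str:
    # digits are -1, 0, 1, written here as T, 0, 1
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3              # remainder 0..2
        n //= 3
        if r == 2:             # a 2 becomes -1 plus a carry into the next place
            r = -1
            n += 1
        digits.append("T" if r == -1 else str(r))
    return "".join(reversed(digits))

# Negative numbers fall out of the same loop -- no sign bit, no two's complement:
print(to_balanced_ternary(5))    # 1TT  (9 - 3 - 1)
print(to_balanced_ternary(-5))   # T11  (-9 + 3 + 1); negation just flips each digit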
Aside: Unbalanced decimal vs. balanced nonal
Here is a more detailed comparison between decimal and balanced nonal. I'm using a, b, c, d as the negative digits -1, -2, -3, -4 here, respectively:
Negation
Here the learning effort for decimal is zero. For balanced nonal, you have to learn the following table with nine entries:
| d c b a 0 1 2 3 4
--------+------------------
inverse | 4 3 2 1 0 a b c d
Addition
Decimal has the following addition table; the right table shows the 45 entries that need to be learned:
+ | 0 1 2 3 4 5 6 7 8 9 + | 0 1 2 3 4 5 6 7 8 9
--+----------------------------- --+-----------------------------
0 | 0 1 2 3 4 5 6 7 8 9 0 |
1 | 1 2 3 4 5 6 7 8 9 10 1 | 2
2 | 2 3 4 5 6 7 8 9 10 11 2 | 3 4
3 | 3 4 5 6 7 8 9 10 11 12 3 | 4 5 6
4 | 4 5 6 7 8 9 10 11 12 13 4 | 5 6 7 8
5 | 5 6 7 8 9 10 11 12 13 14 5 | 6 7 8 9 10
6 | 6 7 8 9 10 11 12 13 14 15 6 | 7 8 9 10 11 12
7 | 7 8 9 10 11 12 13 14 15 16 7 | 8 9 10 11 12 13 14
8 | 8 9 10 11 12 13 14 15 16 17 8 | 9 10 11 12 13 14 15 16
9 | 9 10 11 12 13 14 15 16 17 18 9 | 10 11 12 13 14 15 16 17 18
The same table for balanced nonal only has 16 entries that need to be learned:
+ | d c b a 0 1 2 3 4 + | d c b a 0 1 2 3 4
--+-------------------------- --+--------------------------
d |a1 a2 a3 a4 d c b a 0 d |
c |a2 a3 a4 d c b a 0 1 c |
b |a3 a4 d c b a 0 1 2 b |
a |a4 d c b a 0 1 2 3 a |
0 | d c b a 0 1 2 3 4 0 |
1 | c b a 0 1 2 3 4 1d 1 | 2
2 | b a 0 1 2 3 4 1d 1c 2 | 1 3 4
3 | a 0 1 2 3 4 1d 1c 1b 3 | 1 2 4 1d 1c
4 | 0 1 2 3 4 1d 1c 1b 1a 4 | 1 2 3 1d 1c 1b 1a
Note the missing diagonal of zeros (a number plus its inverse is zero), and the missing upper left half (the sum of two numbers is the inverse of the sum of the inverse numbers).
For instance, to calculate b + d, you can easily derive the result as b + d = inv(2 + 4) = inv(1c) = a3.
Multiplication
In decimal, you have to perform quite a bit of tough learning:
* | 0 1 2 3 4 5 6 7 8 9 * | 0 1 2 3 4 5 6 7 8 9
--+----------------------------- --+-----------------------------
0 | 0 0 0 0 0 0 0 0 0 0 0 |
1 | 0 1 2 3 4 5 6 7 8 9 1 |
2 | 0 2 4 6 8 10 12 14 16 18 2 | 4
3 | 0 3 6 9 12 15 18 21 24 27 3 | 6 9
4 | 0 4 8 12 16 20 24 28 32 36 4 | 8 12 16
5 | 0 5 10 15 20 25 30 35 40 45 5 | 10 15 20 25
6 | 0 6 12 18 24 30 36 42 48 54 6 | 12 18 24 30 36
7 | 0 7 14 21 28 35 42 49 56 63 7 | 14 21 28 35 42 49
8 | 0 8 16 24 32 40 48 56 64 72 8 | 16 24 32 40 48 56 64
9 | 0 9 18 27 36 45 54 63 72 81 9 | 18 27 36 45 54 63 72 81
But in balanced nonal, the table on the right is reduced heavily: The three quadrants on the lower left, the upper right and the upper left all follow from the lower right one via symmetry.
* | d c b a 0 1 2 3 4 * | d c b a 0 1 2 3 4
--+-------------------------- --+--------------------------
d |2b 13 1a 4 0 d a1 ac b2 d |
c |13 10 1c 3 0 c a3 a0 ac c |
b |1a 1c 4 2 0 b d a3 a1 b |
a | 4 3 2 1 0 a b c d a |
0 | 0 0 0 0 0 0 0 0 0 0 |
1 | d c b a 0 1 2 3 4 1 |
2 |a1 a3 d b 0 2 4 1c 1a 2 | 4
3 |ac a0 a3 c 0 3 1c 10 13 3 | 1c 10
4 |b2 ac a1 d 0 4 1a 13 2b 4 | 1a 13 2b
For instance, to calculate c*d, you can just do c*d = 3*4 = 13. Or for 2*b, you derive 2*b = inv(2*2) = inv(4) = d. It's really a piece of cake, once you are used to it.
Taking this all together, you need to learn
for decimal:
0 inversions
45 summations
36 multiplications
Total: 81
for balanced nonal:
9 inversions
16 summations
6 multiplications
Total: 31
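The 81-versus-31 tally can also be checked mechanically; a rough Python sketch of the counting, following the reductions described above (helper names are arbitrary):
def decimal_facts():
    add = {(x, y) for x in range(1, 10) for y in range(x, 10)}   # skip +0, use commutativity
    mul = {(x, y) for x in range(2, 10) for y in range(x, 10)}   # skip *0 and *1
    return len(add), len(mul)

def balanced_nonal_facts():
    digits = range(-4, 5)
    inversions = 9                                               # the negation table
    add = set()
    for x in digits:
        for y in digits:
            if x == 0 or y == 0 or x + y == 0:                   # trivial cases
                continue
            # commutativity plus sign symmetry: keep one representative per orbit
            add.add(min((x, y), (y, x), (-x, -y), (-y, -x)))
    mul = {(x, y) for x in range(2, 5) for y in range(x, 5)}     # only digits 2, 3, 4 remain
    return inversions, len(add), len(mul)

print(decimal_facts())          # (45, 36)  -> 81 facts in total
print(balanced_nonal_facts())   # (9, 16, 6) -> 31 facts in total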
edited yesterday
answered 2 days ago
cmaster
2,361413
I can't quite figure how you get 81 data points for our decimal system. Care to elaborate?
– Wildcard
2 days ago
3
@Wildcard It's 45 data points for addition (1+1, 1+2, 1+3, ..., 2+2, 2+3, ..., 9+9) and 36 data points for multiplication (2*2, 2*3, 2*4, ..., 3*3, 3*4, ..., 9*9). I have removed the trivial additions with zero, the trivial multiplications with zero and one, and the half of the table that follows from commutativity.
– cmaster
2 days ago
1
@Wildcard For balanced nonal, it's similar, except that you need to add 9 data points for inversion. The summation table is reduced by the trivial additions with zero, the trivial additions that yield zero (x+(-x) = 0), by commutativity, and the sign symmetry (-x+(-y) = -(x+y)), so only 16 data points remain. For multiplication, since we already know inversion, multiplication with -1, 0, and 1 is trivial, multiplications with a negative factor follow from symmetry, so we are only left with the table for the digits 2, 3, and 4. Which yields the six data points I've shown in my answer.
– cmaster
2 days ago
@Wildcard I have now updated my answer with a more detailed comparison. Hope you like it.
– cmaster
2 days ago
add a comment |
up vote
9
down vote
Base-4
This might be a natural choice for a society that perfected digital communication before digital computation.
Digital signals are often transmitted (i.e., “passed through the analogue world”) using quadrature phase-shift keying, a special form of quadrature amplitude modulation. This is generally more performant and reliable than simple amplitude modulation, and more efficient than frequency modulation.
QPSK / QAM by default use four different states, or a multiple of four, as the fundamental unit of information. We usually interpret this as “it always transmits two bits at a time”, but if this method had become standard before binary computers, we'd probably be used to measuring information in quats (?) rather than bits.
Ultimately, the computers would at the lowest level probably end up looking a lot like our binary ones, but usually with two bits paired together into a fundamental “4-logical unit”. Unlike binary-coded decimal, this doesn't incur any overhead of unused binary states.
And it could actually make sense to QPSK-encode even the local communication between processor and memory etc. – wireless transmission everywhere! – thus making the components “base-4 for all that can be seen”.
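As a rough Python sketch of the idea (the Gray-coded phase labelling is just one common convention, not a claim about any particular standard): each octet splits into four quats, and each quat selects one of four carrier phases.
import cmath, math

# quat -> carrier phase in degrees; adjacent phases differ in only one underlying bit
QPSK = {0: 45, 1: 135, 3: 225, 2: 315}

def bytes_to_quats(data: bytes):
    for byte in data:
        for shift in (6, 4, 2, 0):        # four base-4 digits per octet
            yield (byte >> shift) & 0b11

def quats_to_symbols(quats):
    return [cmath.exp(1j * math.radians(QPSK[q])) for q in quats]

quats = list(bytes_to_quats(b"\x1b"))     # 0x1B = 00 01 10 11 -> quats 0, 1, 2, 3
print(quats)                              # [0, 1, 2, 3]
print(quats_to_symbols(quats))            # four unit-magnitude phase states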
edited Nov 27 at 14:01
answered Nov 27 at 13:48
leftaroundabout
656510
What might be considered a related interesting factoid is that the DNA genetic code is also base-4. But I wouldn't say this should have any relevance to the development of computers.
– leftaroundabout
Nov 27 at 13:59
Re your comment: biological computers, perhaps?
– Wildcard
2 days ago
QAM waveforms are very different from binary PCM, though, and QAM can encode many numbers of states, not just 4
– endolith
yesterday
1
@endolith yeah, but QAM usually encodes some 2²ⁿ states, i.e. a multiple of four, doesn't it? And anyways before computers, they'd plausibly not go over four states, i.e. QPSK – anything more is basically just exploiting excessive SNR to get higher data rate, but prior to computers they'd probably just engineer the SNR to be just enough for QPSK, which already gives the essential advantage.
– leftaroundabout
yesterday
add a comment |
up vote
6
down vote
It's almost completely irrelevant.
The binary nature of computers is very very very rarely relevant in practice. Just about the only practical situation where the binary nature of computers is relevant is when doing sophisticated error bounds analysis of floating point calculations.
In actual reality, there are quite a few aspects of modern computing which do not even rely on binary representations. For example, we all like SSDs, don't we? Well, modern cheap consumer SSDs are not binary devices -- they use multi-level cells as their fundamental building blocks. For another example, we all like Gigabit Ethernet, don't we? Well, the unit of transmission in Gigabit Ethernet is an indivisible 8-bit octet (transmitted as a 10-bit symbol, but hey, who counts).
No modern computer (all right, hardly any modern computer) can access one bit of storage individually. Usually, the smallest accessible unit is an octet of eight bits, which can be seen as an indivisible atom with 256 possible values. (And even this is not really true; what exactly is the atomic unit of memory access varies from architecture to architecture. Access to one individual bit is not atomic on any computer I know of.)
Donald Knuth's Art of Computer Programming, which the closest thing we have to a fundamental text in informatics, famously uses the fictional MIX computer for practical examples -- and one of the charming characteristics of MIX is that one does not know whether it's a binary of a decimal computer.
What actually matters is that modern computers are digital -- in the computer, everything is a number. That the numbers in question are represented by tuples of octets is a detail which very rarely has any practical or even theoretical importance.
I don't see how this answers the question. In OP's world, the base is important enough that it needs a back story. At best, this should be a short comment on the question.
– pipe
Nov 27 at 9:58
@pipe: The point is that the physical representation is irrelevant, and it cannot be made relevant. (And, unlike other fields where I provide answers or comments on this site, this is actually my profession.) FYI, base 2 is relevant only when referring to the physical representation; any computer can store and manipulate numbers in any base you wish. For example, base 10,000 is moderately popular for arbitary-precision computations.
– AlexP
Nov 27 at 11:12
1
Yes, I'm well aware that the base is not "important", but apparently OP thinks that the physical representation is interesting, interesting enough to want to build a world around it. Obviously such a computer could work in any base as well. Also, a lot of things in our world today is designed the way they are because computers are predominately base 2, for example ANSI art (256 codepoints, 16 colors), 16-bit CD audio, JPEG quantization blocks being 8x8 pixels affecting the quality of all images online, etc..
– pipe
Nov 27 at 12:16
@pipe: On the other hand, the Web-safe color palette has 6³ = 216 colors, the CD sampling rate is 44,100 samples/second, bandwidth is measured in powers of ten bits per second, Unicode has room for 1,114,112 = 17×2⁴ code points...
– AlexP
Nov 27 at 12:23
add a comment |
up vote
6
down vote
It's almost completely irrelevant.
The binary nature of computers is very very very rarely relevant in practice. Just about the only practical situation where the binary nature of computers is relevant is when doing sophisticated error bounds analysis of floating point calculations.
In actual reality, there are quite a few aspects of modern computing which do not even rely on binary representations. For example, we all like SSDs, don't we? Well, modern cheap consumer SSDs are not binary devices -- they use multi-level cells as their fundamental building blocks. For another example, we all like Gigabit Ethernet, don't we? Well, the unit of transmission in Gigabit Ethernet is an indivisible 8-bit octet (transmitted as a 10-bit symbol, but hey, who counts).
No modern computer (all right, hardly any modern computer) can access one bit of storage individually. Most usually, the smallest accessible unit is an octet of eight bits, which can be seen as an indivisible atom with 256 possible values. (And even this is not really true; what exactly is the atomic unit of memory access varies from architecture to architecture. Access to one individual bit is not atomic on any computer I know of.)
Donald Knuth's Art of Computer Programming, which the closest thing we have to a fundamental text in informatics, famously uses the fictional MIX computer for practical examples -- and one of the charming characteristics of MIX is that one does not know whether it's a binary of a decimal computer.
What actually matters is that modern computers are digital -- in the computer, everything is a number. That the numbers in question are represented by tuples of octets is a detail which very rarely has any practical or even theoretical importance.
I don't see how this answers the question. In OP's world, the base is important enough that it needs a back story. At best, this should be a short comment on the question.
– pipe
Nov 27 at 9:58
@pipe: The point is that the physical representation is irrelevant, and it cannot be made relevant. (And, unlike other fields where I provide answers or comments on this site, this is actually my profession.) FYI, base 2 is relevant only when referring to the physical representation; any computer can store and manipulate numbers in any base you wish. For example, base 10,000 is moderately popular for arbitary-precision computations.
– AlexP
Nov 27 at 11:12
1
Yes, I'm well aware that the base is not "important", but apparently OP thinks that the physical representation is interesting, interesting enough to want to build a world around it. Obviously such a computer could work in any base as well. Also, a lot of things in our world today is designed the way they are because computers are predominately base 2, for example ANSI art (256 codepoints, 16 colors), 16-bit CD audio, JPEG quantization blocks being 8x8 pixels affecting the quality of all images online, etc..
– pipe
Nov 27 at 12:16
@pipe: On the other hand, the Web-safe color palette has 6³ = 216 colors, the CD sampling rate is 44,100 samples/second, bandwidth is measured in powers of ten bits per second, Unicode has room for 1,114,112 = 17×2⁴ code points...
– AlexP
Nov 27 at 12:23
add a comment |
up vote
6
down vote
up vote
6
down vote
It's almost completely irrelevant.
The binary nature of computers is very very very rarely relevant in practice. Just about the only practical situation where the binary nature of computers is relevant is when doing sophisticated error bounds analysis of floating point calculations.
In actual reality, there are quite a few aspects of modern computing which do not even rely on binary representations. For example, we all like SSDs, don't we? Well, modern cheap consumer SSDs are not binary devices -- they use multi-level cells as their fundamental building blocks. For another example, we all like Gigabit Ethernet, don't we? Well, the unit of transmission in Gigabit Ethernet is an indivisible 8-bit octet (transmitted as a 10-bit symbol, but hey, who counts).
No modern computer (all right, hardly any modern computer) can access one bit of storage individually. Most usually, the smallest accessible unit is an octet of eight bits, which can be seen as an indivisible atom with 256 possible values. (And even this is not really true; what exactly is the atomic unit of memory access varies from architecture to architecture. Access to one individual bit is not atomic on any computer I know of.)
Donald Knuth's Art of Computer Programming, which the closest thing we have to a fundamental text in informatics, famously uses the fictional MIX computer for practical examples -- and one of the charming characteristics of MIX is that one does not know whether it's a binary of a decimal computer.
What actually matters is that modern computers are digital -- in the computer, everything is a number. That the numbers in question are represented by tuples of octets is a detail which very rarely has any practical or even theoretical importance.
It's almost completely irrelevant.
The binary nature of computers is very very very rarely relevant in practice. Just about the only practical situation where the binary nature of computers is relevant is when doing sophisticated error bounds analysis of floating point calculations.
In actual reality, there are quite a few aspects of modern computing which do not even rely on binary representations. For example, we all like SSDs, don't we? Well, modern cheap consumer SSDs are not binary devices -- they use multi-level cells as their fundamental building blocks. For another example, we all like Gigabit Ethernet, don't we? Well, the unit of transmission in Gigabit Ethernet is an indivisible 8-bit octet (transmitted as a 10-bit symbol, but hey, who counts).
No modern computer (all right, hardly any modern computer) can access one bit of storage individually. Most usually, the smallest accessible unit is an octet of eight bits, which can be seen as an indivisible atom with 256 possible values. (And even this is not really true; what exactly is the atomic unit of memory access varies from architecture to architecture. Access to one individual bit is not atomic on any computer I know of.)
Donald Knuth's Art of Computer Programming, which is the closest thing we have to a fundamental text in informatics, famously uses the fictional MIX computer for practical examples -- and one of the charming characteristics of MIX is that one does not know whether it's a binary or a decimal computer.
What actually matters is that modern computers are digital -- in the computer, everything is a number. That the numbers in question are represented by tuples of octets is a detail which very rarely has any practical or even theoretical importance.
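To make that concrete, here is a minimal sketch (my own illustration, not part of the answer): the same integer rendered as digit lists in several bases, including the base-10,000 limbs mentioned in a comment below as a common arbitrary-precision trick. The function and variable names are mine.

    # A toy base converter (my own sketch, not from the answer). It renders the
    # same integer as digit lists in several bases, including base-10,000 limbs,
    # to show that the base really is just a representation detail.
    def to_digits(n, base):
        """Digits of a non-negative integer n in the given base, most significant first."""
        if n == 0:
            return [0]
        digits = []
        while n:
            n, r = divmod(n, base)
            digits.append(r)
        return digits[::-1]

    value = 1_114_112                # the Unicode code-point count quoted in a comment below
    print(to_digits(value, 2))       # binary bits
    print(to_digits(value, 3))       # ternary trits
    print(to_digits(value, 10))      # decimal digits
    print(to_digits(value, 10_000))  # base-10,000 limbs: [111, 4112]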
answered Nov 27 at 5:53
AlexP
I don't see how this answers the question. In OP's world, the base is important enough that it needs a back story. At best, this should be a short comment on the question.
– pipe
Nov 27 at 9:58
@pipe: The point is that the physical representation is irrelevant, and it cannot be made relevant. (And, unlike other fields where I provide answers or comments on this site, this is actually my profession.) FYI, base 2 is relevant only when referring to the physical representation; any computer can store and manipulate numbers in any base you wish. For example, base 10,000 is moderately popular for arbitrary-precision computations.
– AlexP
Nov 27 at 11:12
1
Yes, I'm well aware that the base is not "important", but apparently OP thinks that the physical representation is interesting, interesting enough to want to build a world around it. Obviously such a computer could work in any base as well. Also, a lot of things in our world today are designed the way they are because computers are predominantly base 2, for example ANSI art (256 codepoints, 16 colors), 16-bit CD audio, JPEG quantization blocks being 8x8 pixels affecting the quality of all images online, etc..
– pipe
Nov 27 at 12:16
@pipe: On the other hand, the Web-safe color palette has 6³ = 216 colors, the CD sampling rate is 44,100 samples/second, bandwidth is measured in powers of ten bits per second, Unicode has room for 1,114,112 = 17×2¹⁶ code points...
– AlexP
Nov 27 at 12:23
up vote
5
down vote
Get rid of George Boole, inventor of Boolean Algebra, probably the main mathematical foundation of computer logic.
Without Boolean Algebra, regular algebra would give quite an edge to decimal computers, even if you needed three to four times as much hardware per digit.
There's no need to kill him; just have something happen that stops his research or gets him interested in another field instead.
answered Nov 26 at 14:29
Emilio M Bumachar
Comments are not for extended discussion; this conversation has been moved to chat.
– L.Dutch♦
2 days ago
up vote
5
down vote
EDIT - On reading the answer by L.Dutch, I see that there is an energy-saving argument for using ternary. I'd be interested to find out how well that holds up in theory. Crucially, the OP talks about transistors rather than thermionic valves, and that could make a difference. There are also other energy questions beyond the simple switching of a transistor. It would be good to know the extent of this saving and any extra cost associated with building and maintaining the hardware. Heat dissipation may also be an issue.
I remain open-minded as well as interested in this approach.
I don't think there is a historical justification for your premise as far as transistors are concerned, so instead I will just say:
The minimum historical change is No Electronics
It's possible to use other bases, but it's just a really bad idea.
IBM 1620 Model I, Level H (IBM 1620 data processing machine with IBM 1627 plotter, on display at the 1962 Seattle World's Fair)
The IBM 1620 was announced by IBM on October 21, 1959, and marketed as an inexpensive "scientific computer". After a total production of about two thousand machines, it was withdrawn on November 19, 1970. Modified versions of the 1620 were used as the CPU of the IBM 1710 and IBM 1720 Industrial Process Control Systems (making it the first digital computer considered reliable enough for real-time process control of factory equipment).
Being variable word length decimal, as opposed to fixed-word-length pure binary, made it an especially attractive first computer to learn on, and hundreds of thousands of students had their first experiences with a computer on the IBM 1620.
https://en.wikipedia.org/wiki/IBM_1620
The key phrase there is variable word length decimal which is a real faff and actually still uses binary at the electronic level.
Reasoning
Any other electronic system than binary will soon evolve into binary because it depends on digital electronics.
It is commonly supposed, by those not in the know, that zero voltage represents a binary zero and some arbitrary voltage, e.g. 5 volts, represents a 1. However, in the real world these voltages are never so precise. It is much easier to have two ranges with a specified changeover point.
Having to maintain, say, ten different voltage levels for ten different digits would be incredibly expensive, unreliable and not worth the effort.
So your minimum historical change is No Electronics.
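To illustrate the noise-margin point above, here is a minimal sketch (my own, not part of the original answer; the 5 V supply, noise level and trial count are arbitrary assumptions). It classifies a noisy voltage into two bands versus ten bands and counts how often noise pushes a symbol into the wrong band.

    import random

    def decode(voltage, levels, v_max=5.0):
        """Map a voltage to one of `levels` evenly spaced bands on 0..v_max."""
        step = v_max / levels
        return max(0, min(int(voltage // step), levels - 1))

    def error_rate(levels, noise_sd=0.2, trials=100_000, v_max=5.0):
        """Fraction of symbols mis-read when each band's centre voltage is sent with Gaussian noise."""
        step = v_max / levels
        errors = 0
        for _ in range(trials):
            symbol = random.randrange(levels)
            received = (symbol + 0.5) * step + random.gauss(0, noise_sd)
            if decode(received, levels, v_max) != symbol:
                errors += 1
        return errors / trials

    print("binary  (2 levels):", error_rate(2))    # wide margins: essentially no errors
    print("decimal (10 levels):", error_rate(10))  # narrow margins: frequent errors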
edited Nov 26 at 14:40
answered Nov 26 at 13:34
chasly from UK
3
This answer does not meet the question's "Transistors were invented in this alternate timeline, in the 1950s" constraint.
– RonJohn
Nov 26 at 14:29
2
@RonJohn - I understand your point. I suppose I could have answered "Given the conditions you propose, there is no historical answer to your question." Maybe I'll change the wording to add that extra sentence.
– chasly from UK
Nov 26 at 14:38
The electronics of the IBM 1620 were entirely binary. The decimal capabilities depended on the way real-world data was encoded in binary, not on hardware that somehow used 10 different states to represent decimal numbers.
– alephzero
2 days ago
@alephzero - If you read my answer carefully you'll see I said, "The key phrase there is variable word length decimal which is a real faff and actually still uses binary at the electronic level."
– chasly from UK
2 days ago
up vote
4
down vote
As I understand it, early cultures used base 12, and it's a lot more flexible than 10: they had a way to count to 12 on the knuckles of one hand and up to 60 on two hands, which is the basis of our "degrees".
10-finger-counters supposedly defeated the base 12ers but kept their time system and degree-based trigonometry.
If the base 12ers had won, a three-state computer might have made a LOT more sense (binary might have actually looked silly). In this case a byte would probably have been built from tri-state digits, sized to hold whole base-12 digits, instead of our 8-bit, two-state layout, which always had a bit of a mismatch with decimal.
We tried to cope with our mismatch by using BCD, throwing away 6 of the 16 states in each nibble (1/2 byte) to get a closer approximation of base 10 and a "pure" math without all these weird binary oddities (like how, written in base 10, 1 byte holds 256 states, 2 bytes hold 65536, etc.).
With a ternary layout sized for base 12, there would be no such mismatch; it would be really clean. A byte holding two base-12 digits would hold a round 100 states written in base 12, and two bytes would hold 10000.
So can you change the numeric base of your book? It shouldn't come up too often :) It would be fun even to number the pages in base 12... complete immersion.
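As a small aside (my own sketch, not part of the answer), here is how ordinary integers could be rendered in base 12 for something like the page numbering suggested above, using A and B for the digits ten and eleven. The helper name is mine.

    DIGITS = "0123456789AB"

    def to_base12(n):
        """Render a non-negative integer in base 12, using A and B for ten and eleven."""
        if n == 0:
            return "0"
        out = []
        while n:
            n, r = divmod(n, 12)
            out.append(DIGITS[r])
        return "".join(reversed(out))

    print([to_base12(n) for n in range(1, 25)])  # page numbers 1..24 -> '1'..'20'
    print(to_base12(144))                        # '100': a round number in base 12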
answered Nov 26 at 19:22
Bill K
Base 6 has almost all of the advantages of base 12, but requires much smaller addition and multiplication tables. And it can be built by pairing binary elements with ternary elements. Also, 6^9 ~ 10^7 (10,077,696).
– Jasper
2 days ago
up vote
3
down vote
Decimal computers.
Modern computers are, indeed, binary. Binary is the classification of an electrical signal as occupying one of two states, conditional on the voltage. For the sake of simplicity, you could say that in a 5V system, any signal above 4V is a '1' and everything else is a '0'. Once a signal has been confined to two states, it's pretty easy to apply Boolean math, which was already well explored ahead of computers. Binary was an easy choice for computers because so much work had already been done in the area of Boolean algebra.
When we needed to increase the range of numbers, we added more signals. Two signals (two bits) can represent 4 distinct values, 3 bits 8 values, and so on. But what if, instead of adding more signals to expand our values, we simply divided the existing signals up more? In a 5V system, one signal could represent one of ten values (0-9) if we divide up the voltage: 0-0.5 volts = 0, 0.5-1.0 volts = 1, 1.0-1.5 volts = 2, etc. In theory, each signal would then carry more than three times the data of a binary signal (log₂ 10 ≈ 3.3 bits). But why stop there? Why not split each signal into 100 distinct values?
Well, for the same reason we never went beyond binary: environmental interference and a lack of precision components. You need to be able to measure the voltages precisely to determine the value, and if those voltages drift, your system becomes unreliable. All sorts of factors can affect electrical voltages: RF interference, temperature, humidity, metal density, etc. As components age, their precision tends to degrade.
Any number of things could have changed this. If you use a different medium, light for example, electrical interference isn't a concern; this is exactly why fiber optics can carry so much more data than electrical connections.
The discovery of a room-temperature superconductor could also have allowed different computers to become standard. A superconductor doesn't lose energy to resistive heating, which means you could push larger signals through a system without fear of overheating, requiring less precise components and little to no cooling.
So, in short, binary computers dominate because of physical limitations related to electricity and the wealth of knowledge (Boolean algebra) that was already available when vacuum tubes, transistors, and semiconductors came about. Change any of those factors, and binary computers may never have been.
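To put some numbers on that trade-off, here is a small sketch (my own illustration, not from the answer): how much information one signal carries at different numbers of voltage levels, and how many signals are then needed to distinguish a million values. The figures are straightforward arithmetic, not measurements.

    import math

    def bits_per_signal(levels):
        """Information carried by one signal that can settle into `levels` distinct bands."""
        return math.log2(levels)

    def signals_needed(distinct_values, levels):
        """Signals required to distinguish `distinct_values` using `levels`-state signals."""
        return math.ceil(math.log(distinct_values, levels))

    for levels in (2, 3, 10, 100):
        print(f"{levels:>3} levels: {bits_per_signal(levels):5.2f} bits/signal, "
              f"{signals_needed(1_000_000, levels):2d} signals for a million values")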
answered Nov 26 at 19:44
Robear
up vote
3
down vote
In the late 1950s, analog computers were developed using a hydraulic technology called fluidics. Fluidic processing is still used in automatic transmissions, although newer designs are hybrid electronic/fluidic systems.
answered Nov 26 at 23:22
Nik Pfirsig
2
Can you explain more about this? Describe fluidic processing, link to more info, and then explain why it might have surpassed binary digital computing?
– kingledion
Nov 27 at 15:20
up vote
2
down vote
Hypercomputation
According to Wikipedia, hypercomputation is defined as follows:
Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.
The Church–Turing thesis states that any "effectively computable" function that can be computed by a mathematician with a pen and paper using a finite set of simple algorithms, can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not effectively computable in the Church–Turing sense.
Technically the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of useful, rather than random, uncomputable functions.
What this means is that hypercomputation can do things ordinary computers cannot do, not in terms of scope limitations such as the ability to access things on a network, but in terms of what can and cannot fundamentally be solved as a mathematical problem.
Consider this: can a computer store the square root of 2 and operate on it? Perhaps, because it could store the coefficients of the polynomial whose root is that square root and then index the roots of that polynomial. Alright, so we can represent the so-called algebraic numbers (at least I believe so). What about all real numbers? Numbers like e and pi are transcendental, so they cannot be captured by that polynomial trick; we can approximate them, but we cannot have perfect representations. We could make pi and e special symbols and enlarge the symbol set, but that is still not good enough. That's the primary thing that comes to mind, at least: the ability to digitally compute any real number with perfect precision.
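As a small aside (my own sketch, not part of the answer): a binary machine can in fact do exact arithmetic on some irrational numbers, such as those of the form a + b*sqrt(2) with rational a and b, which is the polynomial-style trick described above. The class name and examples are mine.

    from fractions import Fraction

    class QSqrt2:
        """A number a + b*sqrt(2), stored exactly as two rationals."""
        def __init__(self, a, b):
            self.a, self.b = Fraction(a), Fraction(b)

        def __mul__(self, other):
            # (a + b*r)(c + d*r) = (ac + 2bd) + (ad + bc)*r, where r = sqrt(2)
            return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                          self.a * other.b + self.b * other.a)

        def __repr__(self):
            return f"{self.a} + {self.b}*sqrt(2)"

    root2 = QSqrt2(0, 1)
    print(root2 * root2)                 # 2 + 0*sqrt(2): squaring sqrt(2) is exact
    print(QSqrt2(1, 1) * QSqrt2(1, -1))  # -1 + 0*sqrt(2): (1+sqrt2)(1-sqrt2) = -1 exactly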
This would be a reason for such a society never to find binary computers useful. At some point we switched from analog to binary because of electrical and signalling considerations; I honestly do not know the details. We modeled the modern notion of a processor, and much else, loosely on the notion of a Turing machine, which ultimately became the formal way of discussing computability, itself a convergence of several lines of thought. There was the idea of something being computable by a human, and then of something being theoretically computable; the rough abstract definition used for many years ended up converging with the notion of the Turing machine. There was also a concept from set theory (I don't recall the name) that ended up defining exactly the same notion of "computable". All of these converging basically meant the matter was settled: that is what we as a society (or even as the human race) were able to come up with as a notion of what is and is not programmable. However, that convergence is the result of perhaps 3,000 years of mathematical development, beginning in concept as far back as Euclid, who formalized the most basic ideas of axioms and theorems. Before that, math certainly existed, but only as a tool; nobody had a formal notion of it, and things were simply obvious and known. If hypercomputation is something humans can do (rather than being limited to machines), then all it would take is one genius in the entire history of mathematics to crack it. I'd say that is a reasonable premise for an alternate history.
add a comment |
up vote
2
down vote
Hypercomputation
According to Wikipedia Hypercomputation is defined to be the following:
Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.
The Church–Turing thesis states that any "effectively computable" function that can be computed by a mathematician with a pen and paper using a finite set of simple algorithms, can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not effectively computable in the Church–Turing sense.
Technically the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of useful, rather than random, uncomputable functions.
What this means is that Hypercomputation can do things computers cannot do. Not in terms of scope limitations such as the ability to access things on a network but rather what can and cannot be fundamentally solved as a mathematical problem.
Consider this. Can a computer store the square root of 2 and operate on it? Well maybe because it could store the coefficients of the polynomial whose solution is that square root and then index the solutions to that polynomial. Alright, so we can the represent so called algebraic numbers (at least I believe so). What about all real numbers? Euler's constant and pi are likely candidates for being unrepresentable in any meaningful sense using binary. We can approximate but we cannot have perfect representations. We could have pi be a special symbol as well as e and just increase the symbolic set. Still not good enough. That's the primary thing that hops to mind to me at least. The ability to digitally compute any real number with perfect precision.
This would be a reason for such a society to never discover binary computers being useful. At some point we switched from analog to binary because of electrical needs and signal stuff. I honestly do not know the details. We modeled the modern notion of processor and other things loosely off of the notion of a Turing Machine which was ultimately the form way of discussing computability which was kind of a multi faceted convergence of sorts. There was the idea of something being human computable and then theoretically computable. The rough abstract definition used for many years ended up converging with that of the notion of the Turing Machine. There was also the set theory concept of something or other (I don't recall the name) that ended up also converging to defining the same exact same concept of "computable". All of these converging basically meant it was said and done. That is what we as a society (or even as the human race for that matter) were able to come up with as a notion of what is and is not programmable. However, that is the convergence of possibly over 3000 years of mathematical development possibly beginning as far back in concept as Euclid when he formalized the most basic concepts of theorems and axioms. Sure math existed but it was just a tool. Nobody had a formal notion of it. Things are just obvious and known. If Hypercomputation is possible for humans to do (rather than it just being a thing limited to machines) then all it would take is one genius in the entire history of math to crack that. I'd say it is a reasonable thing for an alternate history.
answered 2 days ago
The Great Duck
980411
add a comment |
up vote
1
down vote
Base-10 computing machines were used commercially to control the early telephone switching system. The telephone companies used them because they were solving a base-10 problem. As long as transistors remain larger and more expensive than mechanical relays, there's no reason for telephone switchboards to switch to binary.
But that's cheating the spirit of the question. Suppose cheap transistors are invented. Then how can a civilization get out of binary computing? Binary logic is the best way to build an electronic deterministic computer with cheap transistors.
Answer: Analog neural networks outperform manually-programmed computers.
Humans are bad at programming computers directly. Manually-programmed computers can perform only simple unambiguous tasks. Statistical programming, also called "machine learning", can answer questions without clear mathematical answers. Machine learning can answer questions like "is this a picture of a frog". Hand-coding an algorithm to determine "is this a picture of a frog" is well beyond the capabilities of human beings. So are more complex tasks like "enforce security at this railroad station" and "take care of my grandmother in her old age".
Manually-programmed software outnumbers neural-network-based software right now, but that might plausibly be just a phase. Manually-programmed software is easier to create. In a few hundred years, neural-network-based software might outnumber manually-programmed software.
One of the most promising avenues of machine learning is neural networks, which use ideas copied from biological brains. If we invent a good general-purpose AI then it might take the form of a neural network, especially if the AI is based on the human brain.
If you're designing a computer to execute traditional programs then binary is the best way to go. But if the goal of a microchip is to simulate a human brain then it may be inefficient to build a binary computer and then simulate a human brain on it. It might make more sense to build a neural network into hardware directly. The human brain is an analog device, so a microchip based off of the human brain may be an analog device too.
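As a rough illustration of why analog hardware maps naturally onto this workload (a sketch, not a claim about any particular chip): the basic operation of an artificial neuron is a weighted sum pushed through a smooth activation, and every intermediate quantity is a continuous value rather than a bit.

    import math

    def neuron(inputs, weights, bias):
        # One artificial neuron: weighted sum of continuous inputs, then a smooth squashing function.
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-z))  # logistic activation, output anywhere in (0, 1)

    # Nothing here is inherently binary: inputs, weights and output are all real-valued,
    # which is exactly the kind of quantity an analog circuit represents directly.
    print(neuron([0.2, 0.9, 0.4], [1.5, -0.7, 2.0], bias=-0.3))  # roughly 0.54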
If someone figured out how to build a powerful general-purpose AI as an analog neural network then chips optimized for neural networks may largely replace binary computers.
answered Nov 27 at 12:20
lsusr
32617
add a comment |
up vote
1
down vote
One simple change would be to make solid-state electronics impossible. Either your planet doesn't have abundant silicon, or there is some chemical issue which makes it uneconomic to manufacture semiconductors.
Instead, consider what would happen if Charles Babbage's mechanical computer designs (which were intrinsically decimal devices, just like the mechanical calculators which already existed in Babbage's day) were scaled down to nano-engineering size.
The earliest computers used vacuum-tube electronics, not semiconductors. The basic design of vacuum-tube memory circuits was already known by 1920, long before the first computers, but for large-scale computer memory, tubes would have been prohibitively large, power-hungry, and unreliable. The earliest computers therefore used various alternative memory systems, some of which were in effect mechanical rather than electrical. So the notion of totally mechanical computers does have some relation to actual history.
answered 2 days ago
alephzero
1,56527
add a comment |
up vote
1
down vote
Morse code rules.
https://www.kaspersky.com/blog/telegraph-grandpa-of-internet/9034/
Just as modern keyboards retain the QWERTY of the first typewriters, in your world the trinary code of Morse becomes the language of computers. Computers developed to rapidly send and receive messages naturally use this language to send messages other than written language, and then to communicate between parts of themselves.
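A tiny sketch of this "three-symbol" reading of Morse (illustrative only; the MORSE table here covers just a few letters): dot, dash, and the gap that separates letters all carry information.

    # Treating Morse as a code over three symbols: '.', '-', and the inter-letter gap ' '.
    MORSE = {'S': '...', 'O': '---', 'E': '.', 'T': '-'}

    def encode(text):
        # Join letters with a space; the gap is the third symbol doing real work,
        # since without it "...---..." could be split into letters in many different ways.
        return ' '.join(MORSE[c] for c in text.upper())

    print(encode("SOS"))  # ... --- ...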
There are apparently technical reasons making binary more efficient. https://www.reddit.com/r/askscience/comments/hmy7w/if_morse_is_more_efficient_than_binary_why_dont/
I am fairly certain there are more efficient layouts than QWERTY as well, yet many decades after keys needed to be kept spatially distant, we still have QWERTY. So too Morse in your world: it was always the language of computers, and endures as such.
answered 2 days ago
Willk
97.6k25188409
1
Interesting. You could merge this with one of the reasons for ternary mathematics to provide a more robust explanation for ternary computers.
– kingledion
2 days ago
But Morse code is binary
– endolith
40 mins ago
add a comment |
up vote
1
down vote
Oh, man. Although non-binary computers would be extremely inconvenient, I can easily imagine trinary, decimal or even analog computers becoming dominant, due to a terrible force all developers fear: entrenchment, and the need for legacy support. Many times in computing history, we've struggled with decisions made long ago that, out of sheer inertia, we couldn't break free of for a long time. There's a lot of stuff even in modern processors that we never would have chosen if it weren't for the need to support existing software and architecture decisions.
So for your scenario, I imagine that for some reason one type of non-binary computer got a head start. Maybe for many years, computers didn't improve all that much due to some calamity. But software was still written for these weak computers, extremely useful and good software, tons of it. By the time things got going again, it was just much more profitable to focus on making trinary (or whatever) better, rather than trying to redo all the work Ninetel put into their 27-trit processor.
Sure, there are some weirdos claiming that binary is so much more sensible that it's worth it to make a processor that's BISC (binary instruction set circuit) in the bottom with a trinary emulation layer on top. But after the bankruptcy of Transbita, venture capital has mostly lost interest in these projects.
answered yesterday
Harald Korneliussen
1111
New contributor
add a comment |
up vote
1
down vote
They made quantum computing work much sooner than we have
Why have just two states when you can have infinitely many?
They probably had binary computers for a short time, then cracked quantum.
"What is the minimal historical change that would make non-binary computers the standard in a world equivalent to our modern world?"
Someone cracked a cheap, room-temperature way to make qubits
(ref: https://medium.com/@jackkrupansky/the-greatest-challenges-for-quantum-computing-are-hardware-and-algorithms-c61061fa1210)
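A minimal sketch of what "infinitely many states" means for a single qubit (illustrative numbers only): its state is a pair of complex amplitudes, a continuum of possibilities, even though each measurement still yields one of two outcomes.

    import cmath
    import math
    import random

    # A qubit state is (alpha, beta) with |alpha|^2 + |beta|^2 = 1: any angle theta gives a valid state.
    theta = random.uniform(0, math.pi)
    alpha = math.cos(theta / 2)
    beta = cmath.exp(1j * 0.3) * math.sin(theta / 2)  # arbitrary relative phase

    # Measurement collapses the continuum back to two outcomes with these probabilities:
    p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
    print(round(p0 + p1, 10))  # 1.0 for every theta: the state space is continuous, the readout is not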
answered 14 hours ago
GreenAsJade
26717
add a comment |
up vote
1
down vote
The creatures involved have three fingers and use a ternary numeral system in everyday life, and the technical advantages of binary over ternary aren't as great as the advantages of binary over decimal (radix economy, etc.), so why would they ever use a system other than the one they know innately?
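A rough way to see the radix-economy point (a sketch; the cost model "base times digit count" is only the textbook approximation): ternary and binary come out nearly even, while decimal is clearly worse.

    def digits_needed(base, n):
        # How many base-`base` digits it takes to write the positive integer n.
        d = 0
        while n > 0:
            n //= base
            d += 1
        return d

    def radix_economy(base, n):
        # Crude hardware-cost proxy: states per digit times number of digits.
        return base * digits_needed(base, n)

    n = 10 ** 6
    for base in (2, 3, 10):
        print(base, radix_economy(base, n))  # 2: 40, 3: 39, 10: 70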
edited 53 mins ago
answered 22 hours ago
endolith
1114
New contributor
add a comment |
up vote
0
down vote
Binary computers are simply the most efficient ones, claims to the contrary notwithstanding (those claims rest on pretty exotic assumptions; the linked claim that "the future lies in analog computers" is even hilariously wrong, though I can see where Ulmann is coming from).
Binary computers are simply the most cost- and space-efficient option if the technology is based on transistors: a ternary computer would require more transistors than a binary one to store or process the same amount of information. The reason is that electrically, the distinction between "zero volts" and "five volts" is more like "anything below 2.0 volts" versus "anything above 3.0 volts", which is much easier to control than ternary voltage levels such as "below 1.0 volts, between 2.0 and 3.0 volts, or between 4.0 and 5.0 volts". Yes, you need the gaps between the voltage bands, because you have to deal with noise (manufacturing spread and electrical imprecision); and yes, the gaps are pretty large, because the larger the gap, the more variance in the integrated circuits becomes inconsequential and the better your yield (which is THE most important parameter of submicron manufacturing).
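A sketch of the guard-band argument using the example thresholds above (purely illustrative voltages; real logic families differ): the same amount of noise that a binary level absorbs can push a ternary level into an undefined region.

    def decode_binary(v):
        # Two signal bands with one wide (1 V) guard band between them.
        if v < 2.0:
            return 0
        if v > 3.0:
            return 1
        return None  # inside the guard band: undefined, re-sample or flag an error

    def decode_ternary(v):
        # Three signal bands squeezed into the same 0-5 V range leave two narrower guard bands.
        if v < 1.0:
            return 0
        if 2.0 < v < 3.0:
            return 1
        if v > 4.0:
            return 2
        return None

    noise = 0.8
    print(decode_binary(0.0 + noise))   # 0    -- the binary low level still decodes correctly
    print(decode_ternary(2.5 + noise))  # None -- the ternary middle level fell into a guard band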
How to get around this?
Either change the driving parameters. In an economy where efficiency isn't even remotely relevant, you can choose convenience. Such an economy will instantly collapse as soon as it touches a more efficient one, so this requires an isolated economy (Soviet-style or even North-Korea-style), and it takes some extra creativity to design a world economy where a massively less efficient economy isn't voted down with people's feet; historically this was enforced by oppressive regimes, though it might be possible that the people stay at a lower level of income and goods for other reasons.
Or claim basic components that are better at being trinary than transistors. Somebody with a better background in microelectronics than me might be able to propose something that sounds credible, or maybe something that isn't based on classic electrical currents: quantum devices, maybe, or something photonic.
Why is this not done much in literature?
Because, ultimately, it does not matter much whether you have bits or trits. Either way, you bunch together as many of them as you need to represent N decimal digits. Software engineers don't care much, unless they are the ones who write the basic algorithms for addition/subtraction/etc., or the ones who write the algorithms that need to be fast (i.e. those that deal with large amounts of data, whether it's a huge list of addresses or the pixels on the screen).
Some accidental numbers would likely change. Bunching 8 bits into a byte is helpful because 8 is a power of 2; that's why 256 (2^8) tends to pop up in the number of screen colors and various other places. With trinary computers, you'd likely use trytes of nine trits, giving 19683 values. HDR would come much later or not happen at all, because RGB would already have more color nuances, so there would be some non-obvious differences.
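For those accidental numbers, the arithmetic is easy to check (assuming, as above, a nine-trit tryte):

    byte_vals, tryte_vals = 2 ** 8, 3 ** 9
    print(byte_vals, tryte_vals)            # 256 vs 19683 levels per channel
    print(byte_vals ** 3, tryte_vals ** 3)  # ~16.8 million vs ~7.6 trillion RGB colours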
You can simply make it a background fact, never highlight it, just to avoid the explanation.
Which raises the counter-question: what's the plot device you need trinary for?
answered Nov 27 at 6:49
toolforger
471
1
Your answer boils down to: "Your world isn't interesting", which isn't very helpful
– pipe
Nov 27 at 10:03
1
"Bunching 8 bits into a byte is helpful because 8 is a power of 2" - which doesn't explain why many computers (even up to the 1980s) did not have 8 bits in a byte. Word lengths of 12, 14 and 18 bits were used, and later bigger numbers including 48 and 60 bits (divided into ten 6-bit "characters").
– alephzero
2 days ago
@alephzero The drive towards 8-bit bytes isn't a very strong one, admittedly. But eventually it did converge towards 8-bit bytes. Maybe the actual drive was that a byte was barely enough to hold an ASCII character, and that drive played out in times when you wouldn't want to "waste" an extra byte, and the idea of supporting multiple character sets was a non-issue because the Internet didn't exist yet. Still, I'm pretty sure some bit fiddling critically depends on the bit count being a power of two... though I'd have trouble finding such an algorithm, admittedly.
– toolforger
2 days ago
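One concrete example of the kind of bit fiddling that comment alludes to (a common trick, shown as an illustration rather than a claim about any specific codebase): reducing modulo a power of two by masking only works because the modulus is a power of two.

    # x % m can be replaced by x & (m - 1) only when m is a power of two.
    m = 8
    for x in range(100):
        assert x % m == x & (m - 1)
    # With m = 9 (a "round" number in a nine-trit world) the identity fails: 9 % 9 == 0 but 9 & 8 == 8.
    print("masking trick verified for m =", m)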
add a comment |
up vote
0
down vote
Cryptocurrency scammers having convinced sufficiently many big corporations and governments to become partners in their pyramid scheme that economies of scale make their inefficient and ridiculous ternary-logic hardware cheaper than properly-designed computers.
answered yesterday
R..
36637
add a comment |
up vote
-1
down vote
The strength of binary is that it's fundamentally a yes/no logic system; the weakness of binary is that it is fundamentally a yes/no logic system: you need multiple layers of logic to create "yes, and" statements with binary logic. The smallest change you would need to make to move away from binary (in terms of having the rest of the world stay the same but computing be different) would be to have the people who pioneered the science of computers, particularly Turing (thanks @Renan), aim for, and demand, more complex arrays of basic logic outcomes (a, b, c, etc. in varying combinations, all of the above, none of the above). Complex outcome options require more complex inputs, more complex logic gates, and a more complex programming language; consequently computers will be more expensive, more delicate, and harder to program.
A few people might mess around with binary for really basic machines, like pocket calculators, but true computers will be more complex machines.
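To illustrate the "multiple layers of logic" point (a trivial sketch; the variable names are mine): richer outcomes like "all of the above" or "none of the above" are built by stacking several yes/no questions.

    # Three independent yes/no signals...
    a, b, c = True, False, True

    # ...combined through extra layers of binary logic to get richer outcomes:
    any_of = a or b or c
    all_of = a and b and c
    none_of = not any_of
    exactly_one = (a + b + c) == 1  # booleans count as 0/1, so counting is itself another layer of logic

    print(any_of, all_of, none_of, exactly_one)  # True False False False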
1
You are looking for Alan Turing. He is the one who introduced binarism into computing, when describing the Turing machine.
– Renan
Nov 26 at 14:34
@Renan Thanks that's what I thought but I couldn't remember if he got that from someone earlier or not.
– Ash
Nov 26 at 14:36
Playing around with complex arrays still needs each element of the array to have a defined state. If transistors are used then the state would almost certainly still be represented in binary at some level. The equivalence of different types of universal computer has been proven.
– chasly from UK
Nov 26 at 14:44
@chaslyfromUK Yes if transistors are used binary is mechanical inherent, but if transistor logic is insufficient to satisfy the fundamental philosophical and mechanical goals of the people building the system they can't be used, different circuitry will be required.
– Ash
Nov 26 at 14:53
@Renan: You have a strange misconception about Turing machines. A Turing machine is defined by the alphabet (set of symbols) on the tape, the set of internal states, and the rules for state transition. It has nothing to do with numbers, binary or otherwise.
– AlexP
Nov 27 at 11:06
add a comment |
up vote
-1
down vote
The strength of binary is that it's fundamentally a yes/no logic system, the weakness of binary is that it is fundamentally a yes/no logic system, you need multiple layers of logic to create "yes and" statements with binary logic. The smallest change you would need to make to change away from binary (in terms of having the rest of the world being the same but computing being different) would be to have the people who pioneered the science of computers, particularly Turing (thanks @Renan) aim for, and demand, more complex arrays of basic logic outcomes (a, b, c, etc... vary combinations, all of the above, none of the above). Complex outcome options require more complex inputs, more complex logic gates and a more complex programming language: consequently computers will be more expensive, more delicate, and harder to program.
A few people might mess around with binary for really basic machines, like pocket calculators, but true computers will be more complex machines.
1
You are looking for Alan Turing. He is the one who introduced binarism into computing, when describing the Turing machine.
– Renan
Nov 26 at 14:34
@Renan Thanks that's what I thought but I couldn't remember if he got that from someone earlier or not.
– Ash
Nov 26 at 14:36
Playing around with complex arrays still needs each element of the array to have a defined state. If transistors are used then the state would almost certainly still be represented in binary at some level. The equivalence of different types of universal computer has been proven.
– chasly from UK
Nov 26 at 14:44
@chaslyfromUK Yes if transistors are used binary is mechanical inherent, but if transistor logic is insufficient to satisfy the fundamental philosophical and mechanical goals of the people building the system they can't be used, different circuitry will be required.
– Ash
Nov 26 at 14:53
@Renan: You have a strange misconception about Turing machines. A Turing machine is defined by the alphabet (set of symbols) on the tape, the set of internal states, and the rules for state transition. It has nothing to do with numbers, binary or otherwise.
– AlexP
Nov 27 at 11:06
add a comment |
up vote
-1
down vote
up vote
-1
down vote
The strength of binary is that it's fundamentally a yes/no logic system; the weakness of binary is that it's fundamentally a yes/no logic system: you need multiple layers of logic to build "yes, and" statements out of it. The smallest change you would need to make to move away from binary (in terms of keeping the rest of the world the same but computing different) would be to have the people who pioneered the science of computing, particularly Turing (thanks @Renan), aim for, and demand, more complex arrays of basic logic outcomes (a, b, c, etc., in varying combinations, all of the above, none of the above). More complex outcome options require more complex inputs, more complex logic gates and a more complex programming language; consequently computers will be more expensive, more delicate, and harder to program.
A few people might mess around with binary for really basic machines, like pocket calculators, but true computers will be more complex machines.
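As one hedged illustration of what such "more complex arrays of basic logic outcomes" might look like (this sketch is not part of the original answer): in Kleene's three-valued logic each gate becomes a small lookup table rather than a yes/no rule, and every extra value multiplies the table size. The names AND3, OR3, NOT3 are just illustrative.

/* Three-valued (Kleene) logic: 0 = false, 1 = unknown, 2 = true.
 * AND is the minimum of its inputs, OR the maximum, NOT is 2 - x. */
typedef enum { F = 0, U = 1, T = 2 } trival;

static const trival AND3[3][3] = {
    {F, F, F},
    {F, U, U},
    {F, U, T},
};
static const trival OR3[3][3] = {
    {F, U, T},
    {U, U, T},
    {T, T, T},
};
static trival NOT3(trival a) { return (trival)(2 - a); }

/* Example: AND3[T][U] evaluates to U ("true AND unknown is unknown"). */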
edited Nov 26 at 14:35
answered Nov 26 at 14:29
Ash
26k465144
1
You are looking for Alan Turing. He is the one who introduced binarism into computing, when describing the Turing machine.
– Renan
Nov 26 at 14:34
@Renan Thanks that's what I thought but I couldn't remember if he got that from someone earlier or not.
– Ash
Nov 26 at 14:36
Playing around with complex arrays still needs each element of the array to have a defined state. If transistors are used then the state would almost certainly still be represented in binary at some level. The equivalence of different types of universal computer has been proven.
– chasly from UK
Nov 26 at 14:44
@chaslyfromUK Yes, if transistors are used then binary is mechanically inherent; but if transistor logic is insufficient to satisfy the fundamental philosophical and mechanical goals of the people building the system, it can't be used and different circuitry will be required.
– Ash
Nov 26 at 14:53
@Renan: You have a strange misconception about Turing machines. A Turing machine is defined by the alphabet (set of symbols) on the tape, the set of internal states, and the rules for state transition. It has nothing to do with numbers, binary or otherwise.
– AlexP
Nov 27 at 11:06
add a comment |
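A minimal sketch of chasly from UK's point above that the state "would almost certainly still be represented in binary at some level", assuming two-state storage; pack_trits and unpack_trit are hypothetical helpers, not anything from the thread. Each trit takes two bits, and the fourth code is never used: that wasted code is exactly the efficiency cost of holding ternary values in binary hardware.

#include <stdio.h>

/* A trit holds 0, 1 or 2; four trits fit in one byte, two bits each. */
typedef unsigned char trit;

unsigned char pack_trits(const trit t[4]) {
    return (unsigned char)(t[0] | (t[1] << 2) | (t[2] << 4) | (t[3] << 6));
}

trit unpack_trit(unsigned char packed, int i) {
    return (trit)((packed >> (2 * i)) & 0x3u);
}

int main(void) {
    trit word[4] = {2, 0, 1, 2};
    unsigned char b = pack_trits(word);
    printf("packed byte: 0x%02X, trit 2 = %u\n", b, unpack_trit(b, 2));
    return 0;
}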