
Reinventing Traveller

They weren't wrong ... it was used for a long time.
Note that "long time" does not equate to "eternity" ...
According to the TIOBE Index, Fortran is making a comeback, jumping from around 50th into the top 20 in a single month (it's 19th right now). The newest version doesn't look that bad, and concurrency is built right into the language (coarrays and the do concurrent construct have been standard since Fortran 2008).

So, eternity, anyone? Or at least the 3000(-ish) years required to get to the Third Imperium...
 
Re: computers, this discussion comes up fairly regularly, and last time, in the FB group, someone who does data center installations said CT computers were small by comparison, so much of the argument seems to come down to processing power versus size of installation. From construction management, and from the ads in the trade magazines (usually for HVAC equipment), I know that data center installations are indeed a fair bit larger than CT computers. Certain other considerations apply too, like protection from being fried by radiation. And the Apollo vehicle's onboard computer was minimal, with something like 99% of the actual work done in large ground installations. So one could assume the size requirements are not so bad, except for program size; and that could be out of date anyway, though swapping programs around is sort of a mini-game in itself.
Fair point. And quantum computers require a lot of hardware and power to keep them (reliably) in the single-digit Kelvin range. And interstellar navigation via Jump drives, as well as "AI", will undoubtedly require fairly large quantum computers.
 
Yes, I agree, it could be possible. I wouldn't argue against someone liking what they like; the energy is better spent creating something.
 
Why does AI require a large quantum computer? The best I know of runs on 100 W or so and fits in a human skull!

Birds have basic AI capability and they are notoriously bird-brained.

The issue may be the criteria for defining AI.
 
Because we can't make small quantum computers yet.

True story time: I once showed my class a video of a food puzzle that a raven was about to solve. Not one single student in the class could suggest a solution; the bird solved it in seconds.
 
I tend to think it's the tyranny of options: you're distracted by all the possibilities you can conceive of at that moment, whereas the crow is more focussed.
 
The bar for what qualifies as AI has constantly been moved over the years: general AI, not machine learning. There's lots of machine learning that can do great at specific things, but nothing much as far as general AI goes.

Heck - we can't even define what intelligence is, let alone what an artificial intelligence is.
 
I believe both Pascal and Ada were in the future (or in the works...) when Traveller came out in '77. The IBM 1401 could run LISP, though.
Pascal: released 1970.
Ada: US DoD RFP 1975. Alpha test 1977 with four competitors. 1980 release as MIL-STD-1815 (the ANSI standard followed in 1983).

So Pascal was in use, and Ada (while not yet named) was in alpha competition, in '77. Technically it wasn't released, but it was in DoD use, alongside three other contenders, in evaluation.

 
there is a solid definition... but it's a very problematic one...

The Wechsler tests (WAIS & WISC) and their revisions, the Kaufman ABC, and a handful of foreign equivalents measure a specific mixture of skills. These include (across the range): short-term distracted memory (numeric and multi-word, for periods of about 1-3 minutes), sequencing images, coding/decoding, basic general knowledge, pattern completion, pattern replication, language comprehension, story restatement, speed and accuracy of counting objects amidst a wider array of objects, and the ability to follow a line in a field of other lines.
The IQ score is derived by comparing the individual's results to an array of averages by age at norming: 100 is the mean score for your age, one standard deviation is 15 points, and about 68% of all people fall within 85 to 115.
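For anyone who wants to sanity-check that scaling, here's a quick sketch using only Python's standard library (assuming the usual normal model with mean 100 and SD 15):

from statistics import NormalDist

# The normal model behind IQ norming: mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Fraction of the population within one SD of the mean (85 to 115).
within_one_sd = iq.cdf(115) - iq.cdf(85)
print(f"Between 85 and 115: {within_one_sd:.1%}")  # ~68.3%

# And within two SDs (70 to 130), for comparison.
within_two_sd = iq.cdf(130) - iq.cdf(70)
print(f"Between 70 and 130: {within_two_sd:.1%}")  # ~95.4%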
Experimentally, it tends to stay similar for an individual across their adult lifespan, trending slowly downward with age, barring brain injury. Brain injury almost universally results in permanent losses.

Oh, and the problems? It presumes being enculturated into the test-normed culture, and having normal manipulative skills. One friend with cerebral palsy scored lower than me, despite doing better in most general-ed courses (and I had a 3.2 cum in undergrad, 3.8 in major). She couldn't do the physical manipulations for the pattern matching and pattern manipulation, nor the drawing of lines required... essentially, she was penalized for semi-paralysis.

These tests use an inferential basis for it, defining it as "what we're testing for" - a mixture of cultural competence, pattern recognition/replication/modification, coding/decoding symbols, and comprehension of the culturally dominant language and its common sense elements.

It's also worth noting that, given a flashed set of numbers and then switched to a set of blank markers, chimps can usually exceed 12 numbers' locations tapped in ascending order, while college undergrads tend to get only to about 7 for the same one-second exposure. Bonobos do better, but I can't recall how much better.

Traveller's working definition of Intelligence divorces the knowledge portion... that's Edu. It also avoids the cultural issues (we don't have Int when dealing with aliens), while the social aspects are conflated with social classes and castes and are adjusted for cultures.
 
Solid according to some (as you noted, very problematic). I have a more philosophical background, oddly enough, mixed in with computer science (admittedly the two degrees were almost 20 years apart, and, err, the second was over 20 years ago...). Those various tests are, as pointed out, culturally biased: not just specific societal biases, but human-biased as to what we think intelligence is. It's like the whole stellar system generation issue: we really have a single point of reference, Earth, with one established (well, at least dominant) intelligence and several other assumed intelligences (as mentioned: chimps, bonobos, etc.).

Anyway, we're expanding our knowledge of system generation as we get better telescopes and actually see more systems (well, "see" is still a stretch, but we make very good interpretations of the data as we get better data). I feel we won't be able to define intelligence other than by "if it walks like a duck and quacks like a duck, it must be a duck" approximations to ourselves. That's why alien-contact books and movies are always interesting to me: will we even be able to have any understanding at all?

Fun fact: my master's thesis was "ethical implications of artificial intelligence" and after 50-some pages, the basic conclusion was that we cannot tell what intelligence is, but we can certainly decide that something that at least mimics our own self-concept of intelligence should be treated as intelligent. Yes, more circular reasoning with no answer in the end.

Anyone remember ELIZA? Not intelligent in any sense of the word, yet that very simple program managed to trick a lot of people into thinking it was intelligent. We are way past that now in certain domains, but definitely nowhere near a generalized AI.
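For anyone who never ran into it, the whole trick was a short list of substitution rules that reflect the user's own words back. Here's a toy sketch of the general idea in Python (not Weizenbaum's actual script; these rules are invented for illustration):

import re

# Each rule pairs a pattern with a reply template; {0} gets the captured text.
RULES = [
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bmy (.*)", "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    # Try each rule in order and reflect the matched fragment back.
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # stock reply when nothing matches

print(respond("I feel nobody understands me"))
# -> Why do you feel nobody understands me?

That's more or less the entire mechanism, give or take keyword ranking and a pronoun-swapping pass, and it was enough to fool people in 1966.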
 
The interesting thing about IQ testing in the US is the repeatability of scoring and cross-test comparability of scoring - that's why I agree with the pros that it's a solid definition.

Even as it's frustratingly useless at predicting students' academic and career success. Or even offspring IQ...
 
The thing I don't get in Traveller is: if you can duplicate a brain to the point that the information is completely stored and allows reconstruction/rewriting of a personality, why would there not be AI much earlier in the tech cycle? After all, the available processing power is quite trivial by jump-calculation standards.
 
Because you are not duplicating the brain, you are mapping the memory and personality.

A TL12+ robot would likely fool us today into thinking it is self-aware and sentient, but true machine sentience doesn't come along until much higher TLs.

You can run wafer personalities on a computer, probably keeping them occupied in some sort of virtual world.

But if you cannot solve multidimensional field equations in your head, neither can a wafer personality, whereas a jump generate program can.
 
Expected patterns of behaviour, and extrapolated reactions.

A rather massive chess programme, possibly run by a quantum computer.
 
If a machine learns and can fool us into believing it is sentient, doesn't that suggest that it is sentient - or that we aren't? After all, we learn and fool each other into believing we are sentient, despite some evidence to the contrary!
 
We have lots of machines that learn already - are they sentient? They were not programmed to communicate with us or to feign emotion, but they do learn how to do their assigned tasks better.
 
I've only ever made two real changes to the game:
1. Humanoids with furry heads just spoiled the realism for me. I re-imagine the Vargr, Aslan, etc. as humans with modified DNA, so they look fairly human.
2. Skills starting at 18 years old for everyone. I felt this was unrealistic when there are exceptions in real life and in fiction, e.g. Rey Skywalker, River from Firefly, or Dayna from Blake's 7. Dayna has a ton of skills (several different low-tech weapons, laser weapons, energy weapons, handguns, experimental weapons, weapons-tech, computers, martial arts), and that's plausible for the home-schooled daughter of a Prospero-type scientist. Between the home-schooled, child soldiers like River, and the realities of life cutting some childhoods short (e.g. Rey Skywalker), I think it's fine to have a smattering of highly skilled youths here and there.
 