THINKfast recovered

After months of searching, culminating in about six hours of frustrating VirtualBox setup, I’m finally able to play THINKfast, albeit on a Windows 98 virtual machine.

THINKfast was marketed as “brain-training software.” As far as I know, the evidence that any such software produces gains in intelligence transferred beyond the game itself is minimal. What’s important about THINKfast is that it’s a good measure of general intelligence, even promoted for that purpose by legendary psychometrics researcher Arthur Jensen. You can read about THINKfast’s psychometric properties on pumpkinperson’s blog here.

THINKfast won’t run on Windows 10, even in compatibility mode. Setting up the virtual machine required to play it on a modern system is time-consuming and requires following instructions fastidiously. If you’re nevertheless interested in playing it, contact me for instructions and technical support, but I won’t hold your hand through the entire process.

A possible problem is that scores may vary between systems due to lag in input or output. My VirtualBox setup seems as responsive as a typical personal computer, but even imperceptible delays might warp both norms and individual performances against those norms. Nonetheless, I’ll post my scores once they’ve stabilized. This may take a while as: (1) the games usually show large practice effects over initial runs; (2) eventual plateau scores correlate much more highly with intelligence than initial scores do; and (3) the games get harder, and thus make higher scores feasible, as long as you are improving. I am currently at Alpha-Gold after 5 runs.

THINKfast running in a VirtualBox virtual machine of Windows 98.

Typing speed and IQ

I wonder what the correlation between typing speed and IQ is. I doubt it would be very high in the general population owing to the confounding variable of computer proficiency, although that probably correlates somewhat with IQ as well. But as for plateau speed for experienced typists, I bet the correlation is at least moderate, considering how much it resembles a simple “speed” task like on the Processing Speed Index of the Wechsler tests.

I’m one of the fastest typists I know, way faster than even most professional transcriptionists, but still nowhere near the world’s elite. Here are some of my TypeRacer statistics:

Avg. speed (last 10 races): 124 WPM

Best race: 157 WPM

Rank (WPM percentile): 99.8% [remember that this is relative to people who play a competitive typing game, so the percentile in the general population would almost certainly be far higher than even this]

But all of this still pales in comparison, at least from a simple numerical perspective, to my 200 WPM run on the “captcha” task, which allows for some uncorrected typos:

200 WPM at 97% accuracy on the TypeRacer captcha test

A Deep Thought

(To provide some content for this blog’s grand opening, I’ve decided to recycle articles I’ve written previously, and in some cases published elsewhere. This is one such article. It was originally published, with minor differences, in issue #139 of the Glia Society’s journal, Thoth, in December 2019.)

In The Hitchhiker’s Guide to the Galaxy, the supercomputer “Deep Thought” determines that the answer to Life, the Universe, and Everything is 42. Unfortunately, it hasn’t figured out what the question is.

Now suppose that perceptrons, which are commonly called “artificial neurons” in the context of machine learning, could be utilized in a computer program such that there would be a one-to-one mapping between the functions of those perceptrons and the functions of natural neurons. Or, in simpler terms: suppose that “copying” a brain’s neural configuration into perceptrons would result in an artificial neural network that would perfectly mimic the capabilities of that brain. Then, we need to figure out how much processing power that would take to simulate. Here’s a quick-and-dirty estimate:

The human brain contains about 85,000,000,000 neurons, each of which has about 1,000 synapses, each of which fires about 1,000 times per second. Therefore, our perceptron-simulation would require ~1 × 10¹⁷ floating-point operations per second (FLOPS), assuming that one FLOP is equivalent to one synapse firing, which I’ll be the first to admit I’m not sure is a correct assumption. This is a mere 100 peta-FLOPS, which could be reached by a cluster of about 1,000 Nvidia Titan V graphics cards. (This is indubitably a better use for all of that hashing and fossil fuel power than mining Dunning-Krugerrands, a.k.a. Bitcoins.) Even if this estimate needs to be, for whatever reason, adjusted upward by several orders of magnitude, it becomes apparent that we can already build supercomputers with more than enough processing power to run a software equivalent of a human brain! Only if the estimate is many orders of magnitude below the real value does this train of thought derail.
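For the curious, the arithmetic above fits in a few lines of Python. The per-card figure for the Titan V (~110 tensor teraFLOPS) is an assumed round number from its marketing specs, not a measured benchmark:

```python
# Back-of-envelope estimate from the figures quoted above.
# Assumption: one FLOP is equivalent to one synaptic firing.
neurons = 85e9             # ~85 billion neurons
synapses_per_neuron = 1e3  # ~1,000 synapses per neuron
firings_per_second = 1e3   # ~1,000 firings per synapse per second

flops = neurons * synapses_per_neuron * firings_per_second
print(f"{flops:.1e} FLOPS")  # 8.5e+16, on the order of 100 peta-FLOPS

# Rough card count at an assumed ~110 teraFLOPS per Titan V:
titan_v_flops = 110e12
print(round(flops / titan_v_flops), "cards")  # ~773, i.e. about 1,000
```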

The problem, then, is figuring out how to configure the perceptrons. It currently appears infeasible to scan a human brain and then encode it into perceptrons, so our only option is to have the neural network “evolve” through machine learning. In order to do this, the artificial neural network would have to experience selection pressure towards general intelligence, but current techniques can only provide selection pressure towards extremely specific abilities, like clustering stolen credit card numbers or facial identification of Muslims in Xinjiang. If we could create a loss function that would force evolution towards general intelligence, then the “invisible hand” of the evolutionary market would probably solve the problem for us within a reasonable time frame.
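To make concrete what would need “configuring”: a single perceptron is just a weighted sum of its inputs passed through a threshold. A minimal sketch, with weights and bias as arbitrary illustrative values rather than anything brain-derived:

```python
def perceptron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    then a hard threshold (fires = 1, stays silent = 0)."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# With these hand-picked weights the unit acts as an AND gate:
print(perceptron([1, 1], [0.5, 0.5], -0.6))  # 1 (both inputs active)
print(perceptron([1, 0], [0.5, 0.5], -0.6))  # 0
```

“Configuring” a network means choosing such weights for billions of units at once, which is precisely the part that machine learning is supposed to automate.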

Unfortunately, general intelligence is far too complicated to be modeled by current computer programmers, which is why “autodidactic” programs like perceptron networks were created in the first place! The machine learning engineer simply defines the desired output and then throws matrix multiplications at the problem until it’s resolved. Even though we probably have all of the resources we need to create the answer to the problem of artificial general intelligence, we can’t do it because we don’t know how to phrase the question.

So far, I’ve only thought of one solution (“solution” in the sense of Jeopardy!): simulate a system so complicated that having general intelligence provides agents within that system with more evolutionary fitness than any task-specific mental abilities could. This would probably require simulating real-life physical reality on a cosmic scale, or something similar to that, which is far beyond the limits of current computing technology. Expanding the limits of computational tractability should therefore probably be one of the prime factors for providing the decryption key to artificial intelligence. Yog-Sothoth is the gate. Yog-Sothoth knows the gate. Yog-Sothoth is the gate. Yog-Sothoth is the key and guardian of the gate. Past, present, future, all are one in Yog-Sothoth.