Wednesday, June 26, 2013

Nice Hack-a-Day review of my MicroGeiger - a circuit that connects a Geiger tube to the headset jack of your mobile phone in place of the microphone, and software to use it.

My hobby project (open source Geiger counter hardware + software for Android) got a nice mention on Hack-a-Day:

We’re no stranger to radiation detector builds, but [Dmytry]‘s MicroGeiger prototype is one of the smallest and most useful we’ve seen.

The idea behind the MicroGeiger comes from the observation that just about every modern smartphone can provide a small bit of power through the microphone jack. Usually this is used for a microphone, but with the right circuit it can be stepped up enough to power a Geiger tube.


Here's my page about the project, describing the operation in greater detail. Also, the hand-wound transformer can now be replaced with a readily available alternative: an inductor from a broken compact fluorescent lightbulb, with a couple of small extra windings added to it.
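On the software side, the idea is simple enough to sketch: the tube's discharge pulses arrive as short spikes on the audio input, so counting them amounts to threshold detection with a dead time. A minimal sketch (my own illustration, not the actual MicroGeiger code; the threshold and dead time values are made up):

def count_pulses(samples, threshold=0.5, dead_time_samples=100):
    """Count spikes in a block of normalized audio samples (-1..1)."""
    count, last_pulse = 0, -dead_time_samples
    for i, s in enumerate(samples):
        if abs(s) > threshold and i - last_pulse >= dead_time_samples:
            count += 1
            last_pulse = i
    return count

# e.g. at 44100 samples/second: counts per minute = count_pulses(one_second_of_samples) * 60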



A few photos:





Saturday, May 18, 2013

New ion chamber


A prototype simple ion chamber radiation detector built by my girlfriend and me:


It is an ion chamber based on the LMP7721 operational amplifier. The LMP7721 is very convenient for building ion chambers, as it has a very low input bias current (3 fA typical) and, unlike other chips of comparable sensitivity, has two back-to-back diodes for input protection, which help you not fry the chip when you turn on the chamber bias voltage or touch the input wire.

Special thanks to National Semiconductor / Texas Instruments for generously providing engineering samples for this project, and to Eduard for 10 TOhm feedback resistor.
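To get a feel for the numbers: the op-amp works as a transimpedance amplifier, converting the tiny chamber current into a voltage through the feedback resistor, V = I·R. A quick back-of-envelope sketch (my own illustration, not from the schematic):

R = 10e12                                  # 10 TOhm feedback resistor
for current in (3e-15, 100e-15, 1e-12):    # 3 fA (the typical input bias current), 100 fA, 1 pA
    print(f"{current:.0e} A -> {current * R * 1000:8.1f} mV")
# 3e-15 A ->     30.0 mV
# 1e-13 A ->   1000.0 mV
# 1e-12 A ->  10000.0 mV  (likely beyond the supply - the output would saturate)

With 10 TOhm, even the amplifier's own 3 fA shows up as tens of millivolts at the output, which is why such extreme resistor values and careful insulation are needed.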

Circuit diagram:


Errata: the 47k adjustable resistor should be 4.7k (the decimal point got lost in scanning). Note: the voltage stabilizer (the part on the right) is fairly inefficient, and is built this way because I didn't have any better parts to use. I recommend making something more efficient to conserve battery life.

Construction notes: we mounted the LMP7721 dead-bug style (upside down), using air to insulate the input wire (the circuit board would leak too much). The 10M resistor at the negative input of the LMP7721 protects it from over-voltage during power-on or in the event that you touch the input wire.

I recommend first building the circuit without the LMP7721, then tinning the wires that will connect to it, making sure to evaporate any remaining flux, and only then soldering them to the LMP7721 without using any flux (to avoid depositing flux residue on the chip). First bend the wires so they touch the relevant pins, then gently solder them with the very tip of the soldering iron. I used lead-free solder (96.5% Sn, 3% Ag, 0.5% Cu, if I recall correctly), which in addition to being legal and less toxic than lead-based solder has the advantage of being less likely to be contaminated with radioactive lead-210 (a decay product of U-238). I do not know, though, whether lead-free solder is more likely to be contaminated with something else; I tested this solder in my other ion chamber to make sure it is not radioactive. In the final version of this circuit I will probably drill a hole at input pin #8 of the op-amp to keep it air-insulated.

After construction, you have to leave it powered on overnight to let all the static charges on the dielectrics dissipate - until they do, they mess up the readings.

A few things I may add/change about it:

1: The lid of this project box is too thin, and the chamber reacts wildly if you push on the lid (like a condenser microphone). I'm going to add an extra foil cover that is not mechanically coupled to the lid, to minimize this effect. Ideally, I should ground the shielding and add a separate ion chamber box, which would also decrease potential effects due to charge deposition on the plastic parts inside the chamber.

2: Add a packet of desiccant.

3: Add a charger circuit and seal the box against moisture.

4: Add magnetically activated reed switches for shorting out the 10 TOhm feedback resistor and the filtering resistor in the bias supply, permitting easier zero adjustment.

Peek inside:


This project was inspired by
http://techlib.com/science/ion.html

Sunday, April 21, 2013

Applied Irrationality vs Enrico Fermi

So, there are these folks who call themselves "rationalists". You may have seen them online. They refer to "Bayes" a lot, run paid "rationality workshops", collect money to save the world from skynet, and the like. You might begin to wonder: what do they actually know about rationality?

Generally, the truth value of the statements they make is a pretty elusive matter - they focus on untestable propositions such as which interpretation of quantum mechanics is "obviously correct" when you don't know enough of it to even compute atomic orbitals, whether a future super-intelligence is super enough to restore your consciousness from a frozen brain, or how many lives you save by giving them a dollar.

But once in a blue moon they get bold and make well defined statements. Being Half Rational About Pascal's Wager is Even Worse is one such piece, where the rationalism can actually be tested against facts.

Listen to this:
For example.  At one critical junction in history, Leo Szilard, the first physicist to see the possibility of fission chain reactions and hence practical nuclear weapons, was trying to persuade Enrico Fermi to take the issue seriously, in the company of a more prestigious friend, Isidor Rabi:
I said to him:  "Did you talk to Fermi?"  Rabi said, "Yes, I did."  I said, "What did Fermi say?"  Rabi said, "Fermi said 'Nuts!'"  So I said, "Why did he say 'Nuts!'?" and Rabi said, "Well, I don't know, but he is in and we can ask him." So we went over to Fermi's office, and Rabi said to Fermi, "Look, Fermi, I told you what Szilard thought and you said ‘Nuts!' and Szilard wants to know why you said ‘Nuts!'" So Fermi said, "Well… there is the remote possibility that neutrons may be emitted in the fission of uranium and then of course perhaps a chain reaction can be made." Rabi said, "What do you mean by ‘remote possibility'?" and Fermi said, "Well, ten per cent." Rabi said, "Ten per cent is not a remote possibility if it means that we may die of it.  If I have pneumonia and the doctor tells me that there is a remote possibility that I might die, and it's ten percent, I get excited about it."  (Quoted in 'The Making of the Atomic Bomb' by Richard Rhodes.)
This might look at first like a successful application of "multiplying a low probability by a high impact", but I would reject that this was really going on.  Where the heck did Fermi get that 10% figure for his 'remote possibility', especially considering that fission chain reactions did in fact turn out to be possible?  If some sort of reasoning had told us that a fission chain reaction was improbable, then after it turned out to be reality, good procedure would have us go back and check our reasoning to see what went wrong, and figure out how to adjust our way of thinking so as to not make the same mistake again.  So far as I know, there was no physical reason whatsoever to think a fission chain reaction was only a ten percent probability.  They had not been demonstrated experimentally, to be sure; but they were still the default projection from what was already known. If you'd been told in the 1930s that fission chain reactions were impossible, you would've been told something that implied new physical facts unknown to current science (and indeed, no such facts existed).
It almost looks like he's trying to paint a picture in your mind: Enrico Fermi knows that fission happens, but Fermi is too irrational or dull to conclude that a chain reaction is possible. Another of these self-proclaimed rational super-geniuses made similar assumptions at one of those "rationality workshops". Oh, Fermi wasn't stupid, he was just stuck in that irrational thinking which we'll teach you to avoid.

It's an easy picture to paint when nuclear fission is so strongly associated with the chain reaction, but no. Just no.

What Fermi did know was that nuclei can be fissioned by neutrons - meaning that when a neutron hits a heavy nucleus, it *sometimes* splits into two roughly equal pieces (sometimes even into three pieces, and often it does not split at all). Rarely, I must add, because they were doing experiments with natural uranium, which is mostly uranium-238 (U-238), and U-238 tends to absorb neutrons without splitting.

What was entirely unknown at the time of Fermi's estimate was whether any neutrons are emitted as the nucleus splits (or shortly after), whether enough of them are emitted to sustain a chain reaction (you need substantially more than one per fission, because neutrons also get captured without causing fission), and whether they come out with the right energy.

So, fission does not logically imply that neutrons are emitted, and even if neutrons are emitted, you need a complicated, quantitative calculation to see whether there will be a self-sustaining chain reaction or not (a toy version of that arithmetic is sketched below). Many of the more stable nuclei can be fissioned but cannot sustain a chain reaction; U-238, for example.
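To see why it is a quantitative question, here is a toy sketch with made-up illustrative numbers (my own, purely to show the shape of the argument): whether a chain reaction is self-sustaining depends on the product of "neutrons released per fission" and "fraction of those neutrons that go on to cause another fission", not on the mere existence of fission.

def multiplication_factor(neutrons_per_fission, fraction_causing_fission):
    # k > 1 means each fission leads, on average, to more than one further fission
    return neutrons_per_fission * fraction_causing_fission

print(multiplication_factor(2.5, 0.3))   # 0.75  -> the reaction dies out, despite fission happening
print(multiplication_factor(2.5, 0.45))  # ~1.13 -> self-sustaining

Whether the real numbers land above or below 1 is exactly what had to be measured.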

What Fermi and Szilard did first was quantitative measurements, to find out whether there are secondary neutrons, how many on average, of what energy, and how often they get absorbed before causing fission. This was very difficult, as all the evidence is very indirect and many calculations have to be made to get from what you measure to what actually happens.

When they actually did the necessary measurements and learned the facts from which the possibility of a chain reaction followed without assuming anything new, Fermi did the relevant calculations and concluded that a chain reaction was possible, with probability fairly close to 1. He proceeded to calculate the size of, and build, a nuclear reactor, with a quite clever safety system, with a reasonable expectation that it would work (and reasonable precautions against any yet-unknown positive feedback effects).

Contrary to the impression you might get from reading Yudkowsky, once the quantitative measurements actually implied the possibility of a self-sustaining chain reaction (barring some new, unknown facts), Fermi of course assigned a fairly high probability to it. Extrapolation from known facts was never the issue.

You're welcome to research the story of Chicago Pile 1 and check for yourself.

Case closed as far as Fermi goes. Now on to learning the lessons from someone being very wrong, this someone not being Enrico Fermi.
 
Was that historical info some esoteric knowledge that is hard to hunt down? No, it is apparently explained even in the same book that Yudkowsky is quoting:

Fermi was not misleading Szilard. It was easy to estimate the explosive force of a quantity of uranium, as Fermi would do standing at his office window overlooking Manhattan, if fission proceeded automatically from mere assembly of the material; even journalists had managed that simple calculation. But such obviously was not the case for uranium in its natural form, or the substance would long ago have ceased to exist on earth. However energetically interesting a reaction, fission by itself was merely a laboratory curiosity. Only if it released secondary neutrons, and those in sufficient quantity to initiate and sustain a chain reaction, would it serve for anything more. "Nothing known then," writes Herbert Anderson, Fermi's young partner in experiment, "guaranteed the emission of neutrons. Neutron emission had to be observed experimentally and measured quantitatively." No such work had yet been done. It was, in fact, the new work Fermi had proposed to Anderson immediately upon returning from Washington. Which meant to Fermi that talk of developing fission into a weapon of war was absurdly premature.
 as well as in many other books. They didn't read it. Lesson: don't write about things you do not know.

What did they read instead to draw their conclusions from? Another quote from Yudkowsky:

After reading enough historical instances of famous scientists dismissing things as impossible when there was no physical logic to say that it was even improbable, one cynically suspects that some prestigious scientists perhaps came to conceive of themselves as senior people who ought to be skeptical about things, and that Fermi was just reacting emotionally.  The lesson I draw from this historical case is not that it's a good idea to go around multiplying ten percent probabilities by large impacts, but that Fermi should not have pulled out a number as low as ten percent.
Lesson: Don't do that. Don't scan for instances of scientists being wrong, without learning much else. If you want to learn rationality, learn actual science. This whole fission business was such a mess. It is a true miracle they managed to unravel everything based on very indirect evidence, in such a short time. There is a lot to learn here.

Also, probability is an elusive matter. If there is a self-sustaining chain reaction, he's "wrong" for assigning 10%; 90% would be closer, and 99% closer still. If there is no self-sustaining chain reaction, he's "wrong" for assigning 10% rather than 0.1%. If someone says that the probability of a die rolling 1 is 1/6, rolling 2 is 1/6, and so on, after the die has already been looked at, they're wrong six times over.

But the important thing in probability is that on average 1 out of 10 times that Fermi assigns something 10% probability, the matter should turn out to be true.

Oh, and on the main enterprise of saving the world: this attitude is precisely the sort of thing that shouldn't be present. If you grossly underestimate even Fermi, and grossly overestimate how much you can understand about such a topic from a very cursory reading, well, your evaluation of living people, and of active subjects, is not going to be any better.

Sunday, March 17, 2013

Yet another Pascal's Wager rebuttal

So, someone shows up and asks for $10, otherwise they'll use their powers outside the matrix to torture you for 10^(10^10) years (and if you give the money, they'll put you in heaven for that many years). You go all rational and give them $10 because the reward is so great and, for god knows what reason, you aren't certain enough that they are lying. The next thing, another guy shows up, promises heaven/hell for the duration of 10^(10^(10^10)) years, and performs a small miracle, proving that he actually has power outside the matrix. But you haven't got enough cash now. You had your money up for grabs by anyone, and someone took it.

Believe it or not, some people fall for this sort of scam when it's about saving the world from a robot apocalypse. Rather than argue about the possibility of the apocalypse or its prevention, I'll just argue that, in the alternative where the apocalypse is coming and can be prevented, far more competent and successful people might want to save the world some day; those people can have a thousandfold larger probability of being correct, and a thousandfold larger probability of actually doing something constructive rather than turning your money into self-promotional bullshit and into putting their incompetent asses as co-authors on papers.

But you won't have the money any more because, well, you had your money just waiting to be grabbed by a hustler, and one of those took your money.

I'm thinking that partial beliefs are to blame here. If you were to consistently believe that the end is nigh - picture an undeniable proof that an asteroid will destroy all life on Earth in 40 years - you ought to be quite a lot more concerned that the money doesn't go to some crackpots who'll be the first to draft cold fusion rocket plans. Everyone knows what a non-scam looks like - the leader putting in his own money, quitting a lucrative job, and so on.

Tuesday, February 5, 2013

Algorithmic probability: an explanation for programmers



Suppose you do a double slit experiment using a low-luminosity light source, with a night vision device as the viewer.

You see a flash of light - a photon has hit the detector. Then you see another flash of light. If you draw the flashes on grid paper you will, over time, see something like the image on the right:

Suppose you want to predict your future observations. Doing so is called 'induction' - trying to infer the future observations from the past observations.

A fairly general, formalized way to do so is the following: You have a hypothesis pool, initially consisting of all possible hypotheses. Each hypothesis is a computer program that outputs a movie (like computer demos). As you look through the night vision device and see flashes of light, you discard all movies that do not match your observations so far.

To predict the observations, you can look at what the movies show after the current time. But those movies will show different things - how do you pick the most likely one? Note that among the 1024-bit demos (yes, you can write a graphics program in 128 bytes), a 1023-bit demo will appear twice - once followed by 1 and once followed by 0 (it does not matter what comes after a program); a 1000-bit demo will appear roughly 16 million times (2^24), and so on. Thus the movies resulting from shorter demos will be more common - the commonality of a movie will be proportional to 2^-l, where l is the bit length of the demo.
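Here is a toy sketch of that weighting in code (my own illustration - the "demos" are trivial hand-written generators with made-up bit lengths, not real programs): demos inconsistent with the observations are discarded, and the survivors' predictions are weighted by 2^-l.

from fractions import Fraction

def constant(x):
    return lambda n: [x] * n

def alternating(a, b):
    return lambda n: [a if i % 2 == 0 else b for i in range(n)]

# (name, bit length l, function giving the demo's first n outputs)
demos = [
    ("all zeros",      8, constant(0)),
    ("all ones",       8, constant(1)),
    ("alternate 0,1", 12, alternating(0, 1)),
    ("alternate 1,0", 12, alternating(1, 0)),
]

observed = [0]   # what we have seen so far

# discard demos that did not reproduce the observations
alive = [(name, l, gen) for name, l, gen in demos if gen(len(observed)) == observed]

# weight the survivors by 2^-l and combine their predictions for the next output
weights = {name: Fraction(1, 2 ** l) for name, l, _ in alive}
total = sum(weights.values())
prediction = {}
for name, l, gen in alive:
    nxt = gen(len(observed) + 1)[-1]
    prediction[nxt] = prediction.get(nxt, Fraction(0)) + weights[name] / total

print([name for name, _, _ in alive])              # ['all zeros', 'alternate 0,1']
print({k: str(v) for k, v in prediction.items()})  # {0: '16/17', 1: '1/17'} - the shorter demo dominates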

This is the basic idea behind algorithmic probability, with the distinction that algorithmic probability uses a Turing machine (to which your computer is roughly equivalent, apart from the limited memory).

It is interesting to speculate about what those demos may be calculating. Very simple demos can look like a sequence of draw_point(132,631); skipframes(21); draw_point(392,117); and so on, hard-coding the points. Demos of this kind which haven't been thrown away yet grow in size proportionally to the number of points seen on the screen.

We can do better. The distribution of points in the double slit experiment is not uniform. Demos that map a larger fraction of random bit sequences to the more common points on the screen, and a smaller fraction to the less common points, will most often produce the observed distribution. For instance, to produce an approximately Gaussian distribution, you can count the bits set to 1 in a long sequence of random bits.
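A minimal sketch of that bit-counting trick (my own illustration): the number of 1s in n random bits is binomially distributed, which for large n approximates a Gaussian with mean n/2 and standard deviation sqrt(n)/2.

import random
from collections import Counter

n = 256
samples = [bin(random.getrandbits(n)).count("1") for _ in range(100_000)]
hist = Counter(samples)
for k in range(n // 2 - 24, n // 2 + 25, 4):    # print a crude histogram around the mean
    print(f"{k:4d} {'#' * (hist[k] // 200)}")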

This sort of demo can resemble quantum mechanics, where the solution is obtained by calculating complex amplitudes at the image points and then converting them into a probability distribution by applying the Born rule.

Can we do better still? Can we discard the part where we pick just one point? Can we just output a screen of probabilities?

Not within our framework. First off, Turing machines do not output or process real numbers at all; real numbers have to be approximately encoded in some manner (if you ever hear of Solomonoff induction code 'assigning' probabilities, this works through the code converting subsequent random bit strings into specific observations, producing, in the limit, more of some observations than others). Secondly, we haven't observed a screen of probabilities; when you toss a coin once, you don't see a 50% probability of it landing heads, you observe heads (or tails). What we have actually seen was flashes of light, at well defined points. The part of the demo where it picks a single point to draw a flash at is as important as the part where it finds the probability distribution of those points. Those parts are not even separate in our Gaussian distribution example, where the probability distribution is not explicitly computed but instead generated by counting the set bits.

Can we output a multitude of predictions with one piece of code and then look through them for the correct ones? We can't do that without losing predictive power - we'd end up with a very short demo that outputs a sequence from a counter, eventually outputting all possible video sequences.

[This blog post is a work in progress; todo is to take photos of double slit experiment using a laser pointer, and to make other illustrations as appropriate]

Saturday, February 2, 2013

On cryopreservation of Kim Suozzi

 http://www.alcor.org/blog/?p=2716

With the inevitable end in sight – and with the cancer continuing to spread throughout her brain – Kim made the brave choice to refuse food and fluids. Even so, it took around 11 days before her body stopped functioning. Around 6:00 am on Thursday January 17, 2013, Alcor was alerted that Kim had stopped breathing. Because Kim’s steadfast boyfriend and family had located Kim just a few minutes away from Alcor, Medical Response Director Aaron Drake arrived almost immediately, followed minutes later by Max More, then two well-trained Alcor volunteers. As soon as a hospice nurse had pronounced clinical death, we began our standard procedures. Stabilization, transport, surgery, and perfusion all went smoothly. A full case report will be forthcoming.
The hidden horror of this defies imagination. There's suffering, there's anxiety, there's fear, and there's the burden of choice - anxiously reading about the procedures, second-guessing yourself - are you deluding yourself, are you grasping at straws, or are you being rational? When is the tipping point where you'll refuse life support? This kind of decision hurts. I really hope that she simply believed it to be worth a shot and did not have to suffer from the ambiguity of the evidence, but I don't know, and it's scary to imagine facing such choices.

Then, if she is awakened - sadly, it seems exceedingly likely that almost everyone she knew will be dead or aged beyond recognition, and it is exceedingly likely that she will largely not be herself. No one has ever checked whether the vitrification solution reaches the whole of the human brain - the parts it doesn't reach will be shredded, and in the parts it does reach, proteins denature. Tossing a book with pictures into a bucket of solvents is not a good way to preserve its contents when you don't know what the paint is made of, especially if the parts of the book not reached by the solvents are then shredded.

If you're an altruist, well, there are a lot of people to save via more reliable, less mentally painful, vastly cheaper methods to which much of the population lacks access. Altruism doesn't explain cryonics promotion. Selfishness does - you believe in cryonics and you want confirmation, cryonics is your trade so you'll pitch it, you're signed up and you need volunteers to test methods for your own freezing, or you want to feel exclusive and special... A lot of motives, but altruism is not one of them. Promoting cryonics, like promoting any expensive, unproven, highly dubious medical procedure, is not a good thing to do. As far as beliefs go, "you'll almost certainly die but there's a chance you won't" is not the most comforting one.

Speaking of brains: currently, among other things, I develop software for viewing serial block-face microscopy data, on a contract. This is my private opinion, of course - I am not a neurologist, my main specialization is graphics, and I look at neurons to tune and test the software. I don't quite know what all the little bits around are - I look at a bit and I'm like, what is it? And then I go re-read a description by one of the other people on the project, and I'm like, oh, I think it's a mitochondrion inside a dendrite. And then I wonder - why is it here? What does it matter where it is? What is this thing that connects it to the wall? Is it some weird imaging artifact? I do not claim to speak for everyone. I'm doing my part which, among other things, can help figure out how to preserve brains or how to digitize them.

And my opinion, in TL;DR form, is: "do not promote cryonics for use by humans now". If you want to promote something, malaria vaccines are a good idea; if you want to defend something controversial, there's DDT to kill mosquitoes; if you want to defend something overly fancy, there's the mosquito-shooting laser. And that's just malaria. There are a lot of other diseases with well-proven cures which aren't available to everyone.

When we have a better understanding of the brain, preservation will almost certainly be cheap and chemical, rather than cryogenic. Cryogenic preservation requires pumping the brain full of a vitrification solution that prevents ice from forming even at the slow cooling speed of an object as big as a human brain. Those concentrations denature proteins, distort things, and likely detach things. It is more rational to find the right set of chemical fixatives than to use solvents at denaturing concentrations - especially if the solvents do not even reach the whole brain. Liquid nitrogen is wrong, too - it is much too cold; different parts of the brain have different thermal expansion coefficients, and the whole thing cracks as it cools from the glass transition temperature down to liquid nitrogen temperature. One could write pages about such issues.

Wednesday, January 2, 2013

On Kurzweil's estimates of computer power required for mind uploading

Common transhumanist estimates of computing power necessary for mind uploading strike me as extreme low-ball figures.

To simulate the brain correctly would take a lot more computing power than the brain itself actually performs. The brain is a very complex and messy electronic system, not designed in a top-down fashion (at least, I am not aware of any transhumanist creationists). Simulating the brain would be more similar to simulating a weather system than to emulating a different instruction set. Besides the regular chemical synapses, neurons are coupled capacitively (through the electric field), through high-speed electrical synapses, and through astrocytes; there are voltage-gated ion channels, and a lot, lot more. The neurons have to be simulated compartment by compartment, at small time increments. It may well be that the brain has to be simulated on an (on average) 50 nm grid, at 10 kHz time resolution, with about 100 operations per grid cell per step. This would put it at about 10^25 FLOPS to simulate a human brain. 10^25 is nothing to sneer at; it is about a billion times the current best supercomputer (which is at around 10^16). And it can still be a low-ball estimate. A factor-of-a-billion improvement via Moore's law would require shrinking the current semiconductor process linearly by a factor of about 31 thousand (the square root of a billion). Our current feature size is 22 nanometres, and the silicon lattice spacing is about 0.5 nanometres, so we won't get there by improving the current process. The new processes are highly speculative and may not take off for a very, very long time, as the current process is also comparatively very cheap in bulk. All the previous computing improvements simultaneously improved performance and decreased the cost; to improve by a factor of a billion, we would need multiple conceptual breakthroughs that would have to compete with the existing, cheaper alternative right off - and we do not have anything workable in sight. In light of this, I do not expect that Moore's law will continue until mind uploading.
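The rough arithmetic behind that 10^25 figure, under the post's stated assumptions (a 50 nm grid, 10 kHz updates, ~100 operations per cell per step; the ~1.4 litre brain volume is my own added assumption):

brain_volume_m3 = 1.4e-3            # ~1.4 litres (assumed)
cell_size_m     = 50e-9             # 50 nm grid spacing
cells = brain_volume_m3 / cell_size_m ** 3        # ~1.1e19 grid cells
ops_per_second = cells * 10e3 * 100               # 10 kHz updates, ~100 ops each
print(f"{cells:.1e} cells, {ops_per_second:.1e} ops/s")   # ~1.1e19 cells, ~1.1e25 ops/s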

Tuesday, January 1, 2013

Why bitcoin is evil

Bitcoin mining, which is at times profitable (especially when using a botnet), wastes electricity for absolutely nothing. The same effort could be put into, for example, folding@home, or into something enjoyable at least.

That, in my eyes, is reason enough to consider bitcoin evil, even without getting into any other details (such as assassination markets). The number one problem with libertarianism is the tragedy of the commons, and the waste of energy due to bitcoin is a good example of how libertarians do not even try to respect the commons. On the plus side, the mining is capped and so the amount of damage is limited, so an argument could be made that it is a necessary evil.

Monday, December 31, 2012

Yudkowsky's friendly AI, and signalling.

Here's Eliezer Yudkowsky, a somewhat well known blogger, outlining his beliefs about artificial intelligence.

The author's unawareness of just how much he falls into the categories that he himself rejects is rather amusing. For instance, at one point he says that good guys would not implement communism into an AI.

Yet what about him? He grew up in a democracy, with parents who thought they knew better than he did what he'd want when he grew up (I think most of us can recall parents saying you'll appreciate something they force you to do, once you grow up). And his FAI concept follows a mixture of those principles, with the addition of a very simplistic form of utilitarianism. Politically, it is of course about as neutral as a hypothetical Chinese perfect-socialism AI. Most people at that point would notice that they are implementing a political system, and try to argue why their political AI is better than the hypothetical socialist AI; not Yudkowsky. This guy appears blissfully unaware that his political views are political at all; he won't cooperate with your argument by defending his political views prematurely.

More of that here, where he disses with great contempt other AI researchers whose competence is comparable to what would naturally be assumed of him.

This appears to be a kind of signalling. Dissing a category that you naturally belong to is a very easy and dirty way to convince some of the listeners, and perhaps yourself, that you do not belong to this category. Think how much hard work someone would have to do to positively convince you that he is above the typical AI crackpot, let alone a visionary with a valid plan for a safer AI! He'd have to invent something like an algorithm for a self-driving car, a computer vision system, or something else that crackpots can't do. He'd have to have a track record as a visionary. Something that actually works, or several such things. That level of ability is quite rare, and that's why it is hard to demonstrate. But one can just bypass all this and diss typical AI researchers, and a few followers will assume exceptional competence in the field of AI.

Monday, November 12, 2012

Very simple way to control robot with smartphone

A very simple robot controlled from an on-board Samsung Galaxy S2 (using my girlfriend's Galaxy Note as the remote control; todo: write some autonomous software using the phone's camera, sensors, etc.):


The robot is controlled by drawing white rectangles on the smartphone screen, which activate photodiodes (I used red LEDs as photodiodes).
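The control logic itself is trivial - something like the following sketch (my own illustration, not the actual app): decide which photodiode regions of the screen to fill with white for a given command, and the hardware does the rest.

def regions_for(command):
    # returns (light_left_region, light_right_region), one region per wheel circuit
    return {
        "forward": (True,  True),
        "left":    (False, True),
        "right":   (True,  False),
        "stop":    (False, False),
    }[command]

print(regions_for("left"))   # (False, True) - only the right wheel's photodiode is lit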

The robot is built entirely out of various trash and spare components I had:


Close-up on the wheel:

The blue thing on the motor shaft is insulation from some wire, used to increase grip. The wheel is two bottle caps and insulating tape, spinning freely on a thin wire used as a shaft. The weight of the robot presses the wheel and motor shaft together.

Circuit board:


Circuit diagram (I built two of these):


Circuit notes: I used a red LED as the photodiode, BC337s for T1 and T2, and an IRFZ44N for the MOSFET, because that's what I had lying around. I built two of these controllers, one per wheel. If you are buying components for this project, I strongly recommend a different circuit, using a motor controller IC that lets you reverse the motor. You can use the D1, T1, T2, R1 combination with pretty much anything. Also, you may want to connect the lower leg of the photodiode D1 to +9V rather than to the emitter of T1 (reverse-biasing the diode). The IRFZ44N is enormous overkill for these puny motors - it can switch 50 amps at 55 volts.

Thursday, November 8, 2012

Cineplex "Escape from this world"

Full stereoscopic 3D, at 4K resolution, playing in Cineplex theaters in Canada. See a video here.
Clouds were rendered using a customized version of Volumetrics.

Saturday, November 3, 2012

A brief note on different types of probability

(Typing it up to reference in discussions; it is by no means a detailed overview)
It is unanimously agreed that statistics depends somehow on probability. But, as to what probability is and how it is connected with statistics, there has seldom been such complete disagreement and breakdown of communication since the Tower of Babel. Doubtless, much of the disagreement is merely terminological and would disappear under sufficiently sharp analysis.
Leonard Savage, The foundations of statistics, 1954.

The origin of the classical probability
When you throw a symmetrical 6-sided die, after sufficiently many bounces, the probability of getting one specific face is 1/6.

We arrive at this number by considering the symmetry of the die and the physical process of its bouncing. We cannot predict which side the die will land on - due to the extreme sensitivity to initial conditions - but the symmetry permits us to conclude something about the way the die will land on average, if we are to perform many trials. It is not a philosophical stance that this probability represents the frequency of occurrence in an infinite number of trials - it is just really what it is, in our partial model of the die physics.



The probability as observer's belief

The observer's probability that a die on the table has rolled some number may have to be further adjusted based on extra knowledge. For instance you can look at the die, and see that it rolled 5; now your probability that the die has rolled 5 is nearly 100%.

For an example of a more involved event, you can make a robot that throws a die and tosses a coin; if the coin has landed heads, it tells you the number that the die rolled, otherwise it tells you a uniform random number between 1 and 7 inclusive (which it obtains, say, by spinning a small roulette). Exercise for the reader: the robot gave you 6; what is the probability that the die rolled 6, and what is the probability that the robot was answering using the roulette?

Bayes rule

Bayes' theorem is the rule by which probabilities affect other probabilities in examples such as this robot problem (which I recommend you solve on your own).
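For the record, the rule itself is P(A|B) = P(B|A)·P(A)/P(B). If you want to check your answer to the robot exercise numerically, here is a minimal Monte Carlo sketch (my own illustration, not from the post):

import random

trials = 1_000_000
said_six = die_was_six = used_roulette = 0
for _ in range(trials):
    die = random.randint(1, 6)
    heads = random.random() < 0.5
    answer = die if heads else random.randint(1, 7)   # the roulette: uniform 1..7
    if answer == 6:
        said_six += 1
        die_was_six += (die == 6)
        used_roulette += (not heads)

print("P(die rolled 6 | robot said 6)  ~", die_was_six / said_six)
print("P(robot used roulette | said 6) ~", used_roulette / said_six)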

Uses of probability theory in computer graphics (and other applied physics)

Probability theory is widely used in computer graphics, for instance to calculate illumination values by averaging the number of photons that hit a specific area. Pseudo-random number generators are employed in place of the die toss; a pseudo-random number generator is actually rather similar, in essence, to the bouncing of the die. Literal computation of the average number of photons is often prohibitively slow for high quality imagery, as the error decreases proportionally to the inverse square root of the number of simulated photons; a wide variety of more advanced methods, for example Metropolis light transport, are used to improve the asymptotic convergence. Sometimes the convergence can be improved by forcing the photons into a regular, rather than random, pattern and re-regularizing the photon field. The Bayes rule also pops up once in a while.
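A minimal sketch of that inverse-square-root convergence (my own illustration, not a renderer): estimate a known average by random sampling and watch the error shrink roughly as 1/sqrt(N).

import random

def estimate(n):
    # average of n uniform [0,1) samples; the true mean is 0.5
    return sum(random.random() for _ in range(n)) / n

for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9}  |error| ~ {abs(estimate(n) - 0.5):.5f}")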



Probability theory as logic of uncertainties

Classical logic processes certain propositions and their relations to obtain conclusions about the world. Probability theory gives the same results as classical logic in the limit of certainty, and can also be used to process uncertain propositions.



Hypotheses and probability

Suppose you have a hypothesis that a coin is biased and always lands heads. How do you test this hypothesis? You can adopt a strategy of tossing the coin 20 times and believing the coin to be biased if it lands heads every time. Then an unbiased coin will trick you approximately 1 time in a million (2^-20 ≈ 1/1,048,576). Your degree of confidence in the coin being biased is thus described like this: I assume it is biased on the basis of the success of an experiment which had a one in a million chance of an unbiased coin tricking me. (You can choose an adequate number of tosses for the experiment on the basis of the cost of a mistake and the cost of a toss.) This is one of the basic concepts of the scientific method.

But wait, you say. The coin might be biased, or it might not be; it is uncertain whether it is or isn't! How can I assume it is biased? Wouldn't it be useful to find the probability that the coin is biased? If you knew a priori the probability that the coin is biased, you could calculate the probability that the coin is biased after performing a series of experiments, using the above-mentioned likelihood of being tricked in combination with Bayes' theorem.

So there is a strong desire to assign some more or less arbitrary prior probability to the coin being biased, and then update it using Bayes' theorem. While after a multitude of updates you become more correct, this still has all the obvious disadvantages of introducing a made-up, arbitrary number into your calculations.
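A minimal sketch of that Bayesian version (my own illustration; the 1-in-1000 prior is exactly the kind of arbitrary number discussed above): start from a prior for "the coin always lands heads" and update it after observing 20 heads in a row.

from fractions import Fraction

prior_biased = Fraction(1, 1000)          # arbitrary prior
heads_in_a_row = 20

p_obs_if_biased   = Fraction(1)                        # a two-headed coin always shows heads
p_obs_if_unbiased = Fraction(1, 2) ** heads_in_a_row   # ~1 in a million

posterior = (prior_biased * p_obs_if_biased) / (
    prior_biased * p_obs_if_biased + (1 - prior_biased) * p_obs_if_unbiased
)
print(float(posterior))   # ~0.999 - even a 1-in-1000 prior gets overwhelmed by the evidence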

The former representation of the partial resolution of uncertainty by experiment is often called "frequentist", while the latter is called "Bayesian".

Length-dependent prior probabilities for hypotheses


One can assign lower prior probabilities to more complicated hypotheses. Formally, one can represent a hypothesis with a Turing machine input tape of length l, and assign it a probability proportional to 2^-l. This is called the 'Solomonoff prior'. You can strike out the hypotheses that do not match the observations; this is called Solomonoff induction.

Note that this is mathematically equivalent to a belief, held as a dogmatic certainty, that the ultimate physics of the world we live in is a prefix Turing machine fed random bits on the input tape (the probability of a specific starting sequence of length l is then 2^-l).

Saturday, August 11, 2012

Got to love Torvalds

https://lkml.org/lkml/2012/3/8/495

Because that shows that they don't understand what the whole *point*
of the kernel was after all. We're not masturbating around with some
research project.  We never were. Even when Linux was young, the whole
and only point was to make a *usable* system. It's why it's not some
crazy drug-induced microkernel or other random crazy thing.

 Damn straight. I think OSS projects need a bit more of that, with all the rewriting and such going on and everything breaking all the time.

Thursday, August 9, 2012

Highly exceptional cognitive test scores versus exceptional performance

I was pondering the other day why the best performers - Nobel-prize-winning physicists, the best mathematicians, and so on - have somewhat high, but not truly exceptionally high, scores on IQ tests and similar tests, while some ultra-high-IQ individuals, or exceptionally high childhood SAT scorers, seem to be smart but, for lack of a better word, not really as smart or wise or capable as the scores would suggest.

The ongoing Olympics provides a clear analogy. The best sprinters will not be the best marathon runners: the former have a high ratio of fast to slow muscle fibres and a metabolism geared more towards anaerobic performance, whereas the latter have primarily slow fibres and a metabolism geared towards aerobic performance. Past a certain level of exceptional performance, there can be very little overlap between the exceptional performers in these two related, but different, sports. Note that the best sprinters will still run a marathon a fair bit better than the average man, and the best marathon runners will still sprint better than the average man.

The brain is a fair bit more mysterious an organ than muscle, and we understand it very poorly, but there are several well-known variables that represent trade-offs between different types of performance. For instance, the glia-to-neuron ratio. Glia are the support cells that provide nutrition to neurons and remove the by-products of neuronal metabolism; furthermore, glia have recently been found to be implicated in memory.

The glia-to-neuron ratio should influence cognitive performance, and it appears highly unlikely that the optimal ratio for the tests would precisely coincide with the optimal ratio for insight-making or real-world performance. There are many other such trade-offs within the brain: hormone levels, thickness of myelin sheaths, short-range vs long-range connectivity, gray matter vs white matter... you can continue this list for pages.

Short-term tests consisting of a large number of disjointed questions that do not involve a significant body of learned knowledge (outside the verbal portion, which draws on a highly specialized brain region and a very special type of learned knowledge) seem even more distant from insight-making or long-term work than a sprint is from a marathon; I would expect even less overlap between the exceptional performers on those criteria. The relation may be more like that between grip strength and biathlon.

With regard to childhood testing, it seems clear that exceptional childhood athletes at age 10 would have variations (various hormonal imbalances?) that are detrimental to adult performance.

Note that none of this argument contradicts the existence of correlations. The bulk of the measured correlation comes from values close to the mean, and it is the case that all the exceptional performers at age 20 were very good at age 10 - just not as good as the phenotypes which make use of the trade-offs.

edit: See also Spearman's law of diminishing returns: in the high-IQ range, the correlation between different traits decreases. It is thus not surprising that in the high-IQ range the correlation between the skills implicated in an IQ test and the skills implicated in, say, theoretical physics would decrease.

Wednesday, August 1, 2012

Download pages back online

The download pages (for those who purchased Polynomial via Plimus) should be back online. Unfortunately I lost the list of manual activations for Steam customers, but I'm working on better Steam integration anyway, and it will be possible to download the Linux version if you bought Polynomial from Steam.

Tuesday, July 31, 2012

Ion chamber

An ion chamber is an ionizing radiation detector that detects radiation by measuring the electrical conductivity of air (or another gas). I've built one:


In that video, I used an old radium clock to test it.

 I wrapped the clock in plastic to ensure that the radioactive dust would not get out in the event that I accidentally drop it and break the glass - cleaning up radioactive dust can spoil your entire day.

For those into electronics: I am using an LMP7721 amplifier and a 100 TOhm resistor (for which I thank an anonymous contributor – it is very hard to find those resistors!).

The circuit is basically the example one from the LMP7721 datasheet; the other resistors are for input protection, etc. I soldered the parts onto a prototype-ish board, which I then soldered to flaps bent from a can that used to contain peanuts.

The chamber is formed by the can and the wire inside it. The other can, which used to contain peas, is used as shielding (I opened it up for the picture). I simply measure the output with a digital voltmeter. Besides radium clocks, this thing can detect a 1 kg bag of potassium salt.

The advantage of an ion chamber over a Geiger counter is that it works over the entire ionizing radiation range, with a response more closely matching that of the body (Geiger counters are very bad at counting soft X-rays). Another advantage is that it is just a ton simpler, requires no vacuum, can test for radon directly, etc. I'm planning to build another one using a switch to keep the measurement circuitry disconnected from the ion chamber electrode except when measurements are taken. That should allow higher accuracy, and permit me to use a more ordinary amplifier, like the TL072.