You do pose an interesting question, Mal: why is it that people just don't trust computers? Quite possibly Hollywood has little to do with it, and Redmond has everything to do with it!
But aside from that, assuming we did make a more reliable computer, there would still be some resistance to the idea. I believe the primary reason is that computers are machines, and machines do certain specific jobs for which they are designed. When circumstances exceed a machine's design parameters, something bad usually happens to it, and generally that translates to the passengers too. A human, however, has much more flexible "programming," and can respond to a far greater range of things going wrong.
Nowadays, it's not a bad idea to have BOTH a good AI and a human, so an even wider range of bad things can be detected and rectified.
Can a human go nuts? Sure, but those we entrust our safety to have given us reasonable assurance that things like that won't happen, and they are backed up by the 99.99999 or so percent of humans who have not had their brains crash, or who at least schedule their crashes for times when my safety is not in their hands.
I do not 100% trust a human either, by the way. All through my one and only helicopter ride, and on each of my few airplane rides, I wondered whether something bad was going to happen. Though I suppose my worry was more that there would be a mechanical problem.
Anyway, I trust a human more than a computer (aside from the Redmond issue) because a human can be punished if he screws up. What can you do to a computer to punish it? (I'd really like to know, so I can threaten MY computers with it to get them to behave!) For a human, a bad enough offense can result in death; computers do not fear death. They cannot be killed; you can copy them and run them elsewhere. What's to fear? At worst, you can tell it that it will have to waste some time catching up on the news, or that it will lose a few memories.
I suppose it goes back to something I mentioned earlier in the discussion, about my character not being able to figure out how to prove to a computer that he was sentient, except to spitefully destroy himself in the process of killing it.
I don't know how much of the above makes any sense, but it's what occurred to me upon reading your question.