So the question, then, is: are biases inherent? Or do programmers lend their biases to the computers?
That's an interesting question. I'd have to see the programming before and after to really guess at the answer. There is the possibility of Rounding Creep. Do you round up, or down? Let's say we've got the proverbial Solomon judgement to cut the baby in half. Which parent gets the bigger half, or how do we make sure the child is truly bisected, equally?
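To make that Rounding Creep concrete, here's a minimal sketch (the amounts, the always-round-up rule, and the Python are purely illustrative, not anyone's actual system): split five cents between two parties over and over, always rounding the first party's share up to the nearest cent.

```python
import math

def split_round_up(amount_cents):
    """Split an amount (in cents), always rounding the first share up."""
    first = math.ceil(amount_cents / 2)   # "err on the side of caution" -- but for whom?
    second = amount_cents - first
    return first, second

total_a = total_b = 0
for _ in range(10_000):
    a, b = split_round_up(5)   # 5 cents can never be split evenly
    total_a += a
    total_b += b

print(total_a, total_b)   # 30000 vs 20000 cents: a half-cent bias per split grows into a $100 gap
```

The per-decision error is half a cent, which looks negligible, but it lands on the same side every single time.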
That's the kind of cold calculation you expect from completely unbiased computer programming. Over time, continuing to round up will add up, especially with a self-learning iterative loop (self-referential critical thinking, a Hofstadter loop). The programmer may instruct the AI to err on the side of caution, but which side is that? In every case? We can't have it alternate, or use an RNG to randomly round up or down, because then the decision is ultimately no more logical than a coin flip. However, with each iteration that minor bias may reinforce itself, and after 100 iterations it can be 1,000 times the initial bias. That's the way these things work.
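And here's a toy model of that reinforcement, just to show the arithmetic (the 7% rate is made up purely to illustrate the claim: 1.07 raised to the 100th power is roughly 870, on the order of 1,000):

```python
initial_bias = 0.001        # some small initial tilt in the decision rule
reinforcement = 0.07        # hypothetical rate at which precedent amplifies the tilt

bias = initial_bias
for _ in range(100):
    bias *= 1 + reinforcement   # each decision leans on the precedent of the last

print(bias / initial_bias)      # ~867x the original bias after 100 iterations
```

That's the difference between error that merely accumulates and error that compounds: the first grows linearly, the second grows exponentially.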
So that effect of compounding bias, where each decision is based on the precedent of similar decisions before it, can add up multiplicatively once you start compounding similarities across cases, and in that sense it is in fact intrinsic. That's not critical thinking, that's cold hard logic, and such a program might be confused when one of the parents gives up her child rather than have it cut in half.