Large data-driven systems will reproduce the biases that exist within their training data.
Your article was a superb summary of the big data approach to AI.
Ethics-wise, well, I think the machine's results will incorporate whatever bias was in the training data.
And the trouble with training data is that bias is often invisible to people who share that same bias, so whoever cleans the data won't be able to remove it.
Also, a bias may not be statistically detectable in a small dataset, but once the dataset is massive, even a small skew becomes unmistakable.
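To make that concrete, here's a toy sketch of my own (not from the article): I'm assuming a hidden 52% skew in the data and testing it against a fair 50/50 split at two sample sizes, using SciPy's binomial test. The point is just that the same skew hides in a small sample and jumps out in a huge one.

```python
# Toy illustration: a small skew is indistinguishable from noise in a small
# sample, but becomes statistically obvious once the sample is massive.
# The 52% "hidden bias" rate here is an arbitrary assumption for the demo.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
true_rate = 0.52  # hypothetical hidden skew in the data

for n in (100, 1_000_000):
    positives = rng.binomial(n, true_rate)          # simulate n biased observations
    p = binomtest(int(positives), n, p=0.5).pvalue  # test against a fair 50/50 split
    print(f"n={n:>9,}: observed rate = {positives/n:.3f}, p-value vs 50/50 = {p:.2g}")
```

At n=100 the test typically can't reject fairness; at a million samples the skew is beyond doubt. Same bias, different visibility.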
So if, for example, a culture believes that bananas are evil, then when its system outputs results reflecting that belief, nobody there will see it as an error. In fact they'll see it as the system operating correctly.
And my concern is this: one day these kinds of systems will be involved in government-level decisions.
If each system carries the same cultural biases as the country its data came from, then we can't rely on the robot overlords to be balanced saviours.
We'll end up with countries running these mega-brains and still fighting each other, because each machine will have its own country's agenda at heart.
With luck I'm wrong. I just hate the idea that one day we'll invent superintelligence and then use it to carry on being chimpanzees, throwing our processed food at each other like a bunch of tribal idiots.