I never thought of the training data that way before.
It suggests that since the training data may contain biases, and the machine's output is a product of that data, the output will carry those biases.
The machine will end up reflecting a consensus of opinion rather than being accurate. Give it training data full of conspiracy theories and it will tell you we didn't land on the Moon.
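To make that concrete, here's a toy sketch (the data and the "model" are made up purely for illustration): a maximally naive predictor that answers with whatever its training data says most often. Real models are far more sophisticated, but they still fit the distribution of their data, so the same pressure applies.

```python
from collections import Counter

# Toy "training data": answers to one question, dominated by a false consensus.
# (Hypothetical skew, invented for this example.)
training_answers = (
    ["The Moon landing was staged"] * 8
    + ["Apollo 11 landed on the Moon in 1969"] * 2
)

# A maximally naive "model": predict whatever the corpus says most often.
def predict(answers):
    return Counter(answers).most_common(1)[0][0]

print(predict(training_answers))
# -> "The Moon landing was staged": the consensus of the data, not the truth.
```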
This also means that these early machines all hold an opinion rather than a set of verifiable facts.
It could be that even when they seek out their own data, they'll still end up with an opinion rather than the truth.
Very interesting. Thank you for responding.