I never thought of the training data that way before.

It suggests that if the training data has biases, and the machine's output is a product of that data, then the machine's output will carry those biases.

The machine will end up reflecting a consensus of opinion rather than being accurate. Give it training data full of conspiracy theories and it will output that we didn't land on the Moon.
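That "consensus over accuracy" idea can be shown with a toy sketch (my own illustration, not anything from the thread): a model that just predicts the most common answer in its training data will faithfully reproduce whatever consensus, or bias, that data contains.

```python
from collections import Counter

# Hypothetical, conspiracy-heavy training corpus: three "no" answers
# to one "yes". The data's bias becomes the model's answer.
training_data = [
    ("did we land on the moon?", "yes"),
    ("did we land on the moon?", "no"),
    ("did we land on the moon?", "no"),
    ("did we land on the moon?", "no"),
]

def majority_answer(question, data):
    """Predict by returning the most common answer to `question` in `data`."""
    answers = [a for q, a in data if q == question]
    return Counter(answers).most_common(1)[0][0]

print(majority_answer("did we land on the moon?", training_data))  # -> no
```

Real language models are far more sophisticated than a majority vote, but the underlying point stands: the output is a statistical reflection of the data, not an independent check against reality.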

This also means that these early machines all have an opinion rather than a set of verifiable facts.

It could be that even when they seek out their own data, they'll still end up with an opinion rather than the truth.

Very interesting. Thank you for responding.



Seems to me that by the time we need to worry about it, we'll have the tech to terraform any planet we want or bioform ourselves to suit any planet we choose.

And if we can make it that far into the future, our lonely original planet won't mean a whole lot, given how far humanity will have spread.

Unless, of course, we kick off the whole process, civilisation collapses, and a bunch of cave humans notice some really weird stuff going on in their sky, and we save them with a process that, to them, is the work of gods.

Would make a cool book.



Darren Hughes

Life has so many questions. So many issues. So much potential. I occasionally have thoughts that might help you. I hope I can. Peace Out humans.