Biological systems are inherently imperfect; mutations are one example. Imperfections are the driving force of evolution. If an AI catches a glitch in its algorithm, will it be able to tell whether the glitch is an error, or whether it might instead lead to an unexpected eureka moment?
I think the issue here is that humans make judgments: you might write something strange or accidental, then look at it, decide it was actually good, and pursue it. Humans judge their own actions and make subsequent decisions, and a computer would need that same capacity for judgment to tell whether something is a mistake or a "eureka moment," as you said. However, I don't think there is anything that would prevent a computer from learning how to make such judgments.
There is a long-standing area of machine learning called reinforcement learning, in which programs learn by interacting with an environment. The environment is not predefined, and nothing explicitly dictates what is right or wrong; instead, feedback from the environment drives the outcome. So, depending on what the program experiences in the environment, it can learn what counts as right or wrong, good or bad. Reinforcement learning can help open avenues for computers to teach themselves and to develop judgments, which makes it a very interesting area.
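To make that concrete, here is a minimal sketch of the idea, not anything from the discussion itself: an epsilon-greedy "bandit" agent that learns which of several actions pays off purely from reward feedback. The payoff values and function name are hypothetical; nothing labels an action right or wrong in advance, and the reward signal alone shapes the agent's judgment.

```python
import random

def run_bandit(true_payoffs, steps=5000, epsilon=0.1, seed=0):
    """Learn action values from noisy rewards via epsilon-greedy choice."""
    rng = random.Random(seed)
    n = len(true_payoffs)
    estimates = [0.0] * n   # the agent's learned value of each action
    counts = [0] * n
    for _ in range(steps):
        # Mostly exploit the best-known action, sometimes explore.
        if rng.random() < epsilon:
            action = rng.randrange(n)
        else:
            action = max(range(n), key=lambda a: estimates[a])
        # The environment responds with a noisy reward.
        reward = true_payoffs[action] + rng.gauss(0, 1)
        counts[action] += 1
        # Incremental average: nudge the estimate toward the observed reward.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# Hypothetical payoffs; the agent is never told which action is best.
estimates = run_bandit([1.0, 2.0, 0.5])
print(estimates)
```

After enough interaction, the agent's estimates rank the actions by their true payoffs, even though it was never told the answer; this is the sense in which the environment, rather than a predefined rule, teaches the program what is "good."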