As machine learning algorithms become capable of outperforming humans on some narrow tasks, they are increasingly being used to improve themselves, other machine learning systems, or inputs to those systems, such as hardware. In some cases, human feedback used to improve models has been replaced with AI feedback; in others, GPU circuits once designed by humans are now designed by AI systems. Some have warned that this "recursive self-improvement," if scaled up, could cause AI to spiral beyond human control [1][2][3].
The table below collects some current examples of AI systems being used to improve AI systems. It should not be taken as an exhaustive list: these applications arise across many subfields of AI, and we have not been able to review every recent AI paper. The "author" and "author affiliation" columns refer to the authors of the paper; the "submitter" column refers to the person who originally brought the paper to our attention. If you know of an example not listed here, you can submit it here.
[1] Nick Bostrom, Superintelligence
[2] Joseph Carlsmith, Is Power-Seeking AI an Existential Risk?
[3] Dan Hendrycks, Natural Selection Favors AIs over Humans