Read response to Daniel Wallace by Celia Santander Esq.
This is neither a legal conclusion nor a philosophical statement of bias, merely conjecture offered for thought.
Suppose all the programs ever released under the GPL are placed in a non-commercial public domain trust fund.
There are a few million lines of copyrighted code in a Linux distribution. Add all the code at SourceForge. How many lines now? 100 million?
How about everything out there? Technology is growing exponentially. How many lines in ten years? 10 billion?
How many in 20 years? 100 billion?
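As a rough sanity check on those projections (and purely as a hypothetical — the post gives no growth rate), here is a sketch assuming the pool doubles every 18 months, a Moore's-law-style figure. Under that assumption 100 million lines grows about 100-fold per decade, which matches the 10 billion figure at ten years, though it overshoots 100 billion at twenty:

```python
# Hypothetical: if the pool of GPL'd code doubled every 18 months
# (a Moore's-law-style assumption, NOT a figure from this post),
# how big does a 100-million-line pool get?
doubling_years = 1.5
start_lines = 100e6  # rough figure for the pool today, per the post
for years in (10, 20):
    lines = start_lines * 2 ** (years / doubling_years)
    print(f"after {years} years: about {lines:.1e} lines")
```

The exact multiplier doesn't matter much; any exponential assumption makes the pool to be checked grow faster than any fixed checking capacity.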
One requirement to show infringement is access to the code. If it's publicly available, then that element of infringement is a foregone conclusion.
What does a commercially motivated original author do to check that the ever-growing pool of code structures sitting out there in the non-commercial GPL trust is not "substantially similar" to his code? Is he infringing?
How long does it take to check with ESR's code comparator program? ESR says 2 million lines per minute on a 1.8 GHz Intel box. The implementation is O(n log n).
So:
(100E9) x log(100E9) = 25.3 x 100E9 = 2.53E12 (using the natural log).
So 2.53E12 / 2E6 = # of minutes = about 1.26 million minutes.
That's about 2.4 years to find out whether you have any substantially similar code that infringes on the exponentially growing pool of GPL'd code. Of course, the GPL code pool kept growing during those 2.4 years.
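The arithmetic above can be reproduced in a few lines. This follows the post's own assumptions — an O(n log n) comparator, a throughput of 2 million lines per minute, a 100-billion-line pool, and the natural log (which is what the 25.3 figure implies) — so it is only as good as those inputs:

```python
import math

# Back-of-envelope estimate from the post's figures.
n = 100e9        # lines of code in the pool
rate = 2e6       # lines per minute, per ESR's reported throughput
work = n * math.log(n)           # O(n log n) work, ~2.53E12
minutes = work / rate            # ~1.26 million minutes
years = minutes / (60 * 24 * 365)
print(f"work: {work:.3g}  minutes: {minutes:.3g}  years: {years:.2f}")
```

Running it confirms the post's numbers: roughly 1.27 million minutes, or about 2.4 years.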
Is that practical or desirable? Are we wishing for something we really don't want?