I can't say that I understand the issue fully. When AbstractDependencyInfo returns its unresolved requirements, it is an unordered set, so the order of the requirements is not deterministic.
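Just to illustrate the ordering point, here is a minimal sketch of how a deterministic order could be imposed on such a set. The Requirement interface and getName() accessor are placeholders for illustration, not the actual AbstractDependencyInfo API.

    import java.util.Comparator;
    import java.util.Set;
    import java.util.SortedSet;
    import java.util.TreeSet;

    // Hypothetical sketch: impose a stable order on an otherwise unordered
    // set of requirements, e.g. by sorting on a requirement name.
    // 'Requirement' and 'getName()' are placeholders, not the real API.
    public final class RequirementOrdering {

        interface Requirement {
            String getName();
        }

        static SortedSet<Requirement> deterministicOrder(Set<Requirement> unordered) {
            SortedSet<Requirement> sorted = new TreeSet<>(Comparator.comparing(Requirement::getName));
            sorted.addAll(unordered);
            return sorted;
        }
    }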
In AS we currently distribute these bundles:
jboss-osgi-blueprint.jar
jboss-osgi-common-core.jar
jboss-osgi-common.jar
jboss-osgi-http.jar
jboss-osgi-husky.jar
jboss-osgi-jaxb.jar
jboss-osgi-jmx.jar
jboss-osgi-jndi.jar
jboss-osgi-reflect.jar
jboss-osgi-webapp.jar
jboss-osgi-webconsole.jar
jboss-osgi-xerces.jar
jboss-osgi-xml-binding.jar
org.apache.aries.jmx.jar
org.apache.aries.util.jar
org.apache.felix.eventadmin.jar
org.apache.felix.log.jar
org.osgi.compendium.jar
They come with mandatory, optional, and of course dynamic requirements. One possibility could be to add a few "special" log messages that write an audit log to a dedicated appender. The test case could then extract the module + caps + reqs topology from the log and build up the metadata from it, roughly as sketched below.
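A rough sketch of that idea follows; the logger category, message format, and method name are made up for illustration and not existing code.

    import java.util.logging.Logger;

    // Hypothetical sketch of the "audit log" idea: emit one easily parsable
    // line per module with its capabilities and requirements on a dedicated
    // logger category that a test handler/appender could capture and parse
    // back into a module + caps + reqs topology. Category name and message
    // format are illustrative only.
    public final class ResolverAuditLog {

        private static final Logger AUDIT = Logger.getLogger("org.jboss.osgi.resolver.audit");

        public static void logTopology(String moduleId, Iterable<String> caps, Iterable<String> reqs) {
            AUDIT.info("MODULE " + moduleId
                    + " CAPS " + String.join(",", caps)
                    + " REQS " + String.join(",", reqs));
        }
    }

The test could then replay these lines to rebuild the resolver input without needing the actual bundles.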
I believe it is a resolve-time problem only, so there should be no actual class load needed.
The standalone Runtime, BTW, resolves these bundles in a timely manner even without the cache. One reason could be that the compendium and a few other key bundles are installed/resolved first, before the others come in. Another reason is that there we only deal with OSGi modules, unlike in AS, where we have very many modules with an unknown set of caps/reqs.
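For what it's worth, "install and resolve key bundles first" could look something like this with the standard OSGi 4.3+ FrameworkWiring API; the bundle locations are placeholders and error handling is omitted.

    import java.util.Arrays;
    import org.osgi.framework.Bundle;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.BundleException;
    import org.osgi.framework.wiring.FrameworkWiring;

    // Sketch only: eagerly install and resolve a couple of key bundles
    // before the remaining bundles are installed.
    public final class EagerResolve {

        public static void installKeyBundlesFirst(BundleContext context) throws BundleException {
            Bundle compendium = context.installBundle("file:bundles/org.osgi.compendium.jar");
            Bundle log = context.installBundle("file:bundles/org.apache.felix.log.jar");

            // Ask the framework (via the system bundle) to resolve these bundles now.
            FrameworkWiring wiring = context.getBundle(0).adapt(FrameworkWiring.class);
            wiring.resolveBundles(Arrays.asList(compendium, log));
        }
    }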
I might also add that I'm sure the algorithm (without the cache) does not run into an endless loop after all. In any case, I'm fairly sure that the cache would prevent that.