I implemented a different approach to this problem in https://github.com/thheller/shadow-build. It has been working well for me so let me explain what I do.
1) The user specifies all source paths
Each source path is scanned for .cljs and .js files. JS files are just scanned for goog.provide/require statements via regexp; CLJS files are assumed to have the ns form first, which is parsed to extract their requires/provides. No further analysis is done.
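The regexp scan for JS files could look roughly like this (an illustrative sketch in Python, not shadow-build's actual code; all names are my own):

```python
import re

# Match goog.provide('ns') / goog.require('ns') with single or double quotes.
PROVIDE_RE = re.compile(r"goog\.provide\(['\"]([^'\"]+)['\"]\)")
REQUIRE_RE = re.compile(r"goog\.require\(['\"]([^'\"]+)['\"]\)")

def scan_js(source: str) -> dict:
    """Extract the provided and required namespaces from a Closure-style JS file."""
    return {
        "provides": PROVIDE_RE.findall(source),
        "requires": REQUIRE_RE.findall(source),
    }

js = """
goog.provide('goog.string');
goog.require('goog.string.StringBuffer');
"""
info = scan_js(js)
```

No parsing beyond this is needed, since Closure-style files declare their dependencies as plain top-level statements.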
2) From all of this information a dependency graph is built (plus a lookup index, e.g. "where can I find goog.string?").
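The lookup index is essentially a map from provided namespace to the file providing it. A minimal sketch (data shapes and names are my own assumptions):

```python
def build_index(scanned):
    """Map every provided namespace to the file that provides it."""
    index = {}
    for filename, info in scanned.items():
        for ns in info["provides"]:
            index[ns] = filename
    return index

scanned = {
    "closure/goog/string/string.js": {"provides": ["goog.string"], "requires": []},
    "src/app/core.cljs": {"provides": ["app.core"], "requires": ["goog.string"]},
}
index = build_index(scanned)
# index["goog.string"] answers "where can I find goog.string?"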
3) The user specifies their "main" namespaces. A main is simply a namespace that either exports functions for outside use or runs code on its own behalf. The main namespaces are sorted in topological order so we have the entire graph needed for the compilation. If a namespace is required but not provided, I do not attempt to build anything and instead throw an exception.
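The topological sort plus the "required but not provided" check can be sketched as a depth-first walk from the mains (again an illustrative Python sketch, not the real implementation; no cycle detection shown):

```python
def sort_namespaces(mains, provides, requires):
    """Depth-first topological sort starting from the main namespaces.

    Dependencies come before their dependents in the result.
    Raises if a namespace is required but no source file provides it.
    """
    order = []
    seen = set()

    def visit(ns):
        if ns in seen:
            return
        if ns not in provides:
            raise Exception("namespace required but not provided: " + ns)
        seen.add(ns)
        for dep in requires.get(ns, ()):
            visit(dep)
        order.append(ns)

    for main in mains:
        visit(main)
    return order

provides = {"app.core", "app.util"}
requires = {"app.core": ["app.util"], "app.util": []}
# sort yields dependencies first: app.util before app.core
```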
4) All files are "compiled" in reverse order. Since only CLJS requires compilation, JS files are skipped. CLJS compilation is also a lot "safer" than cljs.closure since dependencies are "guaranteed" to exist (e.g. we can't compile a file unless all of its requires are present, which may or may not be ideal).
5) Optionally everything may be grouped into modules; the output is either flushed to disk unoptimized or fed to the Closure optimizer and then written to disk.
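Steps 4 and 5 together amount to a walk over the sorted namespaces where CLJS files go through the compiler and JS files pass through untouched. A toy sketch of that dispatch (filenames and the tuple output are my own stand-ins for the real compile/flush steps):

```python
def build(order, index):
    """Walk namespaces in dependency order: compile CLJS, pass JS through."""
    out = []
    for ns in order:
        filename = index[ns]
        if filename.endswith(".cljs"):
            out.append(("compile", filename))      # CLJS goes through the compiler
        else:
            out.append(("passthrough", filename))  # JS is already JS, nothing to do
    return out

order = ["goog.string", "app.core"]
index = {"goog.string": "goog/string/string.js", "app.core": "src/app/core.cljs"}
result = build(order, index)
```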
Although I must admit I never used the ```:libs``` directive, I'm pretty sure it's not needed with this approach: just put the library into a "source path" and let the regexp scan it. As long as it has goog.provide/require statements it's good to go. You can even make it a "main" by specifying its goog.provide; a CLJS main is not required.
I've been pretty busy lately and shadow-build still needs some cleanup I want to do, but it's working very well for my project (> 10k CLJS LOC).
Maybe some of the work I did could help here.