Affects Version/s: None
Fix Version/s: None
I'm using clojure.core.typed 0.2.19 with the slim classifier, but I've observed the same behaviour without slim.
My suspicion is that the latent filters associated with functions grow exponentially in size with each extra optional key on an HMap (based on the output when you hit a type error). It seems to generate all combinations of present and absent keys for the HMap when calculating a function's latent filters.
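If that suspicion is right, n optional keys would yield 2^n present/absent combinations for the checker to track. A minimal sketch of that growth (plain Clojure arithmetic, my illustration, not core.typed internals):

```clojure
;; If every optional key can independently be present or absent,
;; n optional keys produce 2^n distinct HMap shapes to consider.
(defn shape-count [n]
  (long (Math/pow 2 n)))

(map shape-count (range 1 11))
;; => (2 4 8 16 32 64 128 256 512 1024)
```

At ten optional keys that is already 1024 shapes per annotation, which would explain the non-linear checking times below.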
I've attached a tarball with a lein project containing ten namespaces, each with the same ten simple functions of the same form; the type annotations vary only in the number of optional keywords.
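The attached functions are of roughly this shape (my reconstruction, with made-up names, not the actual attachment):

```clojure
(ns test-hmap.example
  (:require [clojure.core.typed :as t]))

;; Each function takes an HMap with one mandatory key and a growing
;; number of optional keys; only the annotation differs between them.
(t/ann f1 [(t/HMap :mandatory {:a t/Int}
                   :optional  {:b t/Int}) -> t/Int])
(defn f1 [m] (:a m))

(t/ann f2 [(t/HMap :mandatory {:a t/Int}
                   :optional  {:b t/Int :c t/Int}) -> t/Int])
(defn f2 [m] (:a m))
```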
(test-hmap.core/go) checks all the namespaces. The time to check each namespace grows non-linearly; the first namespace is additionally penalised by core.typed initialisation the first time it is run.
E.g. on my local machine:
Project and file with a one-liner (excluding type definitions) that kills (check-ns).
I hope this one-liner helps make the issue easy to reproduce.
Currently I don't know of a workaround for this issue, so I cannot use core.typed to check my project.
I expect that many people/projects will run into the same issue, as this pattern recurs quite often in Clojure:
- start with a data set of small hashmaps
- the hashmaps evolve over a number of steps (more keys are added and some existing keys are removed)
I guess the workaround would be to use few or no optional keys and to prepare a custom HMap definition for each stage of analysis. Although this would work, the amount of bookkeeping is already significant when all records run through the same stages (a linear process), and it would be a showstopper for a non-linear process, where not all records follow the same path (sequence of analysis stages).
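One way to spell out that workaround (a sketch under my assumptions, with hypothetical stage names and keys):

```clojure
(ns test-hmap.stages
  (:require [clojure.core.typed :as t]))

;; Instead of one HMap with many optional keys, define an alias per
;; analysis stage, each listing only the keys mandatory at that stage.
(t/defalias Stage1 (t/HMap :mandatory {:id t/Int}))
(t/defalias Stage2 (t/HMap :mandatory {:id t/Int :score t/Int}))

;; Each stage transition is a function between concrete stage types,
;; so no optional keys (and no combinatorial filters) are involved.
(t/ann enrich [Stage1 -> Stage2])
(defn enrich [m] (assoc m :score 0))
```

The bookkeeping cost is exactly what the paragraph above describes: every distinct path through the stages needs its own chain of alias definitions.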
||Field||Original Value||New Value||
|Priority|Major [ 3 ]|Blocker [ 1 ]|
|Status|Open [ 1 ]|Resolved [ 5 ]|
|Resolution| |Completed [ 1 ]|
|Status|Resolved [ 5 ]|Closed [ 6 ]|