Replies: 5 comments 3 replies
-
There is a related issue (#19951) discussing how simply indexing symbol-to-file-path mappings without precise name resolution does not significantly improve performance.
-
Serialization is planned, but mostly to improve startup performance, and not everything will be serialized, as that would have very high disk usage.
-
I see, thanks! Still, for full-time developers who typically work on one project at a time, even tens of GBs of storage might be an acceptable tradeoff — especially with 1–2TB SSDs being common today.
-
We do cache most name resolutions and type inference results in memory, so if you e.g. do the exact same "Find references" request a second time immediately after the first, it should be pretty fast. But (1) we do not eagerly run type inference on all functions in the project, and (2) type inference results are easily invalidated by changes to the code. (You can't partition this by file: changes in any file can affect name resolution in any other file in the crate and in any dependent crates.) For point 1, I don't think doing type inference eagerly would be helpful — at best you'd just be waiting the same amount of time, only earlier, and we'd probably do a lot of work that is never needed. For point 2, reducing unnecessary invalidations of type inference is actually being worked on, as I understand.
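Point 2 above can be illustrated with a minimal sketch (not rust-analyzer's actual implementation, which uses Salsa) of revision-based cache invalidation: every edit anywhere bumps a global revision, and a cached query result is reused only if its revision is still current. All names here (`Database`, `infer_type`, etc.) are invented for illustration.

```python
class Database:
    def __init__(self):
        self.revision = 0      # bumped on every edit, anywhere
        self.files = {}        # file name -> source text
        self.cache = {}        # query key -> (revision, result)
        self.computations = 0  # counts how often we actually recompute

    def edit(self, name, text):
        self.files[name] = text
        self.revision += 1     # any edit conservatively invalidates everything

    def infer_type(self, name):
        key = ("infer", name)
        hit = self.cache.get(key)
        if hit is not None and hit[0] == self.revision:
            return hit[1]      # cache hit: revision unchanged, reuse result
        self.computations += 1  # cache miss: recompute from scratch
        result = f"type info for {self.files[name]}"
        self.cache[key] = (self.revision, result)
        return result

db = Database()
db.edit("a.rs", "fn f() {}")
db.infer_type("a.rs")
db.infer_type("a.rs")            # identical repeated query: served from cache
db.edit("b.rs", "fn g() {}")     # an edit in a *different* file...
db.infer_type("a.rs")            # ...still invalidates a.rs's cached result
```

This is why the repeated "Find references" is fast while almost any edit makes the next query slow again: the cache keys off a global revision rather than per-file state, because cross-file effects make per-file invalidation unsound.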
-
As for caching, I understand that the immediate practical challenge is whether we can use Salsa’s persistent cache.
-
Does it make sense to store precise name resolution and type information in an on-disk database (e.g., SQLite)?
By incrementally updating this index for edited or new files, we could speed up queries significantly, while reducing memory usage.
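To make the proposal concrete, here is a hypothetical sketch of such an on-disk index using SQLite, with per-file incremental updates: when a file is edited, its old rows are dropped and freshly extracted symbols are inserted. The schema and all names (`symbols`, `reindex_file`) are invented for illustration; a real index would also need to store resolution and type data, not just symbol locations.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # would be a file on disk in practice
conn.execute("""
    CREATE TABLE symbols (
        name TEXT NOT NULL,
        file TEXT NOT NULL,
        line INTEGER NOT NULL
    )
""")
conn.execute("CREATE INDEX idx_symbols_name ON symbols(name)")

def reindex_file(path, symbols):
    """Incrementally refresh the index for one edited or new file:
    delete its stale rows, then insert the freshly extracted symbols."""
    with conn:  # one transaction per file keeps the index consistent
        conn.execute("DELETE FROM symbols WHERE file = ?", (path,))
        conn.executemany(
            "INSERT INTO symbols(name, file, line) VALUES (?, ?, ?)",
            [(name, path, line) for name, line in symbols],
        )

reindex_file("src/lib.rs", [("foo", 1), ("bar", 10)])
reindex_file("src/lib.rs", [("foo", 1)])  # file edited: 'bar' was removed
rows = conn.execute("SELECT name FROM symbols ORDER BY name").fetchall()
```

Note that, per the issue linked in an earlier comment (#19951), a plain symbol-to-location index like this does not by itself help much without precise name resolution; the open question is whether the resolution and inference results themselves can be persisted this way.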
Has such an approach been considered? What are the technical challenges involved?
I appreciate any insights or references to existing discussions.
Thank you.