While trying to test #110217, both to provide feedback and to test its integration into our distribution (as well as meson), I've built LLVM from main (concretely 87bfa58). Due to the constraints of building on public CI (as well as for other concerns), we're building and packaging LLVM in various stages (compare here).
The main issue I wanted to raise is that I haven't managed to build flang 20 (and only flang; i.e. with clang, llvm, mlir etc. already pre-built) in Azure Pipelines' free CI; it runs out of memory on an agent with 7-8GB main memory plus a 16GB swapfile. I can try switching from -j2 (implicitly) to -j1, but then I'm afraid the build will run into timeouts, as we're already pushing the hard 6h timeout. While flang has always been a handful to build, I consider this a regression in the broader sense since v19. CC @banach-space @sscalpone
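For completeness, one thing I could try short of dropping to -j1 across the board is limiting only link parallelism, since the link steps tend to dominate peak memory. A sketch of such a configuration (a standalone build assumption for illustration only, not our actual CI setup; these `LLVM_PARALLEL_*` cache variables only take effect with the Ninja generator):

```shell
# Sketch: keep two compile jobs, but serialize the memory-hungry links.
cmake -G Ninja ../llvm \
  -DLLVM_ENABLE_PROJECTS="flang" \
  -DLLVM_PARALLEL_COMPILE_JOBS=2 \
  -DLLVM_PARALLEL_LINK_JOBS=1
ninja
```

Whether that avoids the OOM without blowing the 6h timeout remains to be seen.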
This is perhaps another case where relanding #95672 would be beneficial (compare also discussion in #112789).
Hi, I wonder what compiler/linker combo you're using to build LLVM here? Building with cl.exe/link.exe is significantly more resource-intensive than with clang-cl/lld-link in my experience.
If building with clang-cl on the Azure CI pipelines is possible, you could try that. I would expect 8 GB of RAM to be enough for two build threads with that combo (speaking from my own experience).
I agree that re-landing that patch could be helpful for these cases, especially where building with cl.exe is a requirement.
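In case it helps, a rough sketch of the toolchain switch I have in mind (assuming clang-cl and lld-link are already on PATH; flags are illustrative, adjust to your staged setup):

```shell
# Sketch: configure the standalone flang build with clang-cl/lld-link
# instead of cl.exe/link.exe to reduce per-process memory usage.
cmake -G Ninja ../llvm \
  -DCMAKE_C_COMPILER=clang-cl \
  -DCMAKE_CXX_COMPILER=clang-cl \
  -DCMAKE_LINKER=lld-link \
  -DLLVM_ENABLE_PROJECTS="flang"
```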
We originally built flang with clang-cl, but that ran into #86459, causing us to switch back to MSVC (which fixed things). I can check whether switching back to clang-cl works better nowadays.
OTOH, the fact that v19 was buildable this way while main apparently is not still points at some kind of regression IMO (though the most benign interpretation is that it's simply due to continued growth of the flang codebase).
I regularly build on Windows with clang-cl these days and try to make sure breakage gets fixed when it comes up. It's generally very reliable.
Unfortunately the most recent VS update has broken building with clang-cl again, due to a bug in std::variant in the Microsoft STL that for some reason cl doesn't hit...
I wouldn't generally expect it to be less reliable than building with cl.exe though, especially once we have more reliable CI set up doing builds with clang-cl (which I'm also working on).
I find building with cl difficult for similar reasons to yours; it just uses an awful lot of RAM. I do have a couple of ideas that I want to investigate to solve this. Something that would really help is data about which files specifically tend to crash the process: if you have info on the specific files causing more issues than most, it would give me a really good starting point for investigating.