My changes to how incremental compilation handles container types mean
that, at least for now, it is possible for the ZIR `.main_struct_inst`
of a source file to be lost (this happens, for instance, if the number
of top-level fields in a file changes). I missed a few things which
needed changing to account for this, which could lead to crashes with
certain (trivial) changes---oops!
Adds two new incremental test cases. They are currently disabled for
wasm32-wasi-selfhosted because they both trigger a crash in the WASM
backend.
I previously wrote some awkward code in the compiler frontend solely
because the LLVM backend has some unusual requirements, but the better
solution is to avoid those requirements. This commit does that by
introducing "alignment forward references" to `std.zig.llvm.Builder`.
Much like debug forward references, they allow you to reference an
alignment value which will be populated at a later time (and which can
be updated many times, which is important for incremental compilation).
Then, when we want to reference a type's ABI alignment while the type is
not necessarily resolved (required for `@"align"` attributes on function
parameters and function call arguments), we create a forward reference
and use `link.ConstPool` to populate it when ready.
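The mechanism can be sketched roughly like this (illustrative only; the real `std.zig.llvm.Builder` API and the `AlignPool` name here are not the actual implementation):

```zig
// An alignment forward reference is a stable handle into a table of
// slots that can be filled in (and re-filled) after being handed out.
const AlignPool = struct {
    /// null = alignment not yet resolved.
    slots: [max_refs]?u32 = [_]?u32{null} ** max_refs,
    len: u32 = 0,

    const max_refs = 64; // arbitrary bound for the sketch
    const Ref = enum(u32) { _ };

    /// Hand out a reference before the alignment is known.
    fn fwdRef(p: *AlignPool) Ref {
        defer p.len += 1;
        return @enumFromInt(p.len);
    }

    /// May be called more than once for the same reference, which is
    /// what makes this usable under incremental compilation.
    fn resolve(p: *AlignPool, ref: Ref, alignment: u32) void {
        p.slots[@intFromEnum(ref)] = alignment;
    }
};
```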
This allows us to remove some extremely arbitrary calls to
`Sema.ensureLayoutResolved` from the compiler frontend, so that the
language specification is no longer shaped by the particular needs of
our compiler implementation's LLVM code generation backend.
The change in codegen/x86_64/CodeGen.zig was not strictly necessary (the
Sema change already solves the error I was getting there); I just think
it's better style anyway.
I was trying out combining struct layout resolution with resolution of
default field values, but it broke a few cases which it's not clear we
want to break. The simplest such case was a struct with a field which
was a slice of itself, with a default value of `&.{}`.
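Concretely, the simplest breaking case was roughly:

```zig
// A struct whose field is a slice of the struct itself, with an
// empty-slice default value.
const S = struct {
    children: []const S = &.{},
};
```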
So, at least for now, I'm accepting defeat and splitting this back out.
This allows a couple of behavior tests which were removed to be
re-introduced---I will do that in the commit following this one.
I have *not* made this separate phase of resolution "lazy": instead, it
is tied to layout resolution, in the sense that if a struct's layout is
referenced, then its default field values are also referenced. I chose
this approach for simplicity---not of the implementation (it's actually
slightly *more* code to do it this way!), but in terms of the language
specification. I think this behavior is easier to understand and keep in
your head. It can be easily changed in future if we decide we want to.
This partially reverts the commit titled "compiler: merge struct default
value resolution into layout resolution".
The LLVM backend can now run the behavior tests and standard library
tests, like the x86_64 backend can. This commit required me to make a
lot of changes to how the LLVM backend lowers debug information, and
while I was doing that, I improved a few things:
* `anyerror` is now an enum type (and other error sets just wrap it), so
error values appear by name in debuggers
* Fixed broken lowering for tagged unions with zero-width payloads
* Associate container types with source locations in all cases
* Avoid depending on the order of type resolution (using the new
`DebugConstPool` abstraction), so debug information will contain all
available type information rather than just the subset which happens
to be resolved when the backend lowers that debug type
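As an illustration of the second point, the kind of type whose debug info lowering was broken is a tagged union where some payload is zero-width (the name `Result` here is just for the example):

```zig
const Result = union(enum) {
    ok, // zero-width (void) payload
    fail: u32,
};
```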
Introduces a small abstraction, `link.DebugConstPool`, to deal with
lowering type/value information into debug info when that information
may not be known until type resolution (which in some cases will
*never* happen). It is
currently only used by self-hosted DWARF logic, but it will also be of
use to the LLVM backend (which is my next focus).
This actually doesn't cause any dependency loops in std, which is pretty
much my benchmark for it being acceptable. This can be reverted if it
turns out to be problematic, but for now, let's err on the side of
language simplicity.
To be clear, this *does* regress some cases which previously worked: I
will have to remove some behavior tests as a result of this commit. To
be honest, the tests failing as a result of this exercise patterns which
I think are generally inadvisable; I actually reckon a bit more friction
around using default field values in non-trivial ways might be a good
thing, discouraging people from misusing them. Struct fields
should very rarely have default values; about the only common situation
where they make sense is "options" structs.
Now that https://github.com/ziglang/zig/issues/24657 has been
implemented, the compiler can simplify its internal representation of
comptime-known `packed struct` and `packed union` values. Instead of
storing them field-wise, we can simply store their backing integer
value. This simplifies many operations and improves efficiency in some
cases.
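For example (the `Flags` type here is illustrative, not from the compiler), a comptime-known packed value reduces to a single backing integer:

```zig
const Flags = packed struct(u8) {
    read: bool,
    write: bool,
    _pad: u6 = 0,
};

// Field-wise representation: { .read = true, .write = false, ._pad = 0 }.
// Backing-integer representation: just one u8 value.
const f: Flags = .{ .read = true, .write = false };
const backing: u8 = @bitCast(f);
```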
...and rework some of the incremental reference tracking. Almost all
kinds of AnalUnit have one property in common: they might never be
referenced in any update despite conceptually "existing", in which case
we don't want to waste time semantically analyzing them. As of the lazy
type resolution introduced in this commit, the only units to which this
does not apply are `memoized_state` and `@"comptime"`. Previously, I had
a somewhat hacky system in `Zcu` for dealing with this, but I now have a
better understanding of the design incremental compilation is converging
on, so I can implement a better solution. By finding a few unused bits
lying around (...or making them), we can represent a single bit of state
indicating whether something's corresponding units have ever been
referenced. This is akin to the units being in `Zcu.outdated`, with the
key difference being that the compiler will *not* attempt to analyze
units which are in this state. Once they are first referenced or
depended on, the flag is set to true and the unit is added to `outdated`
so that it can participate in the normal dependency resolution logic.
Importantly, this adds the ability to get the Clock resolution, which
may be zero.
This allows error.Unexpected and error.ClockUnsupported to be removed
from timeout and clock reading error sets.
This error is actually only ever directly returned from `std.posix.getcwd` (and only on POSIX systems, so never on Windows). Its inclusion in almost all of the error sets it is currently found in is a leftover from when `std.fs.path.resolve` called `std.process.getCwdAlloc` (https://github.com/ziglang/zig/issues/13613).
use the application's Io implementation where possible. This correctly
makes writing to stderr cancelable and fallible, and lets it participate
in the application's event loop. It also removes one more hard-coded
dependency on a secondary Io implementation.
Eliminate the `std.Thread.Pool` used in the compiler for concurrency and
asynchrony, in favour of the new `std.Io.async` and `std.Io.concurrent`
primitives.
This removes the last usage of `std.Thread.Pool` in the Zig repository.
Apple's own headers and tbd files prefer to think of Mac Catalyst as a distinct
OS target. Earlier, when DriverKit support was added to LLVM, it was
represented as a distinct OS. So why Apple decided to represent Mac
Catalyst only as an ABI in
the target triple is beyond me. But this isn't the first time they've ignored
established target triple norms (see: armv7k and aarch64_32) and it probably
won't be the last.
While doing this, I also audited all Darwin OS prongs throughout the codebase
and made sure they cover all the tags.
I started this diff trying to remove a little dead code from the C
backend, but ended up finding a bunch of dead code sprinkled all over
the place:
* `packed` handling in the C backend which was made dead by `Legalize`
* Representation of pointers to runtime-known vector indices
* Handling for the `vector_store_elem` AIR instruction (now removed)
* Old tuple handling from when they used the InternPool repr of structs
* Straightforward unused functions
* TODOs in the LLVM backend for features which Zig just does not support
`std.Io.tty.Config.detect` may be an expensive check (e.g. involving
syscalls), and doing it every time we need to print isn't really
necessary; under normal usage, we can compute the value once and cache
it for the whole program's execution. Since anyone outputting to stderr
may reasonably want this information (in fact they are very likely to),
it makes sense to cache it and return it from `lockStderrWriter`. Call
sites that do not need it will experience no significant overhead, and
can just ignore the TTY config with a `const w, _` destructure.
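The compute-once idea looks roughly like this (names and signatures here are assumptions loosely based on this commit, not the real `std.debug` internals):

```zig
const std = @import("std");

// Cached for the whole program's execution after the first call.
var cached_tty: ?std.Io.tty.Config = null;

fn stderrTtyConfig() std.Io.tty.Config {
    if (cached_tty) |cfg| return cfg; // cheap path after first call
    // Potentially expensive (may involve syscalls); done once per process.
    const cfg = std.Io.tty.Config.detect(std.fs.File.stderr());
    cached_tty = cfg;
    return cfg;
}
```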
Far simpler, because everything which `crash_report.zig` did is now
handled pretty well by `std.debug` anyway. All we want is to print some
context around panics and segfaults. With the new ability to override
the default segfault handler while still having std handle the
target-specific bits for us, that becomes really simple.