LLVM is gradually transitioning from the `getelementptr` instruction to
a new `ptradd` instruction. The latter doesn't actually exist yet, but
for now, LLVM treats `getelementptr i8` as equivalent. LLVM is already
internally canonicalizing `getelementptr`
usages to this pattern in many cases, and it's far easier for us to emit
that, so... let's do so!
For runtime indexing this does sometimes require an explicit
multiplication to scale an index to a byte offset. The helper function
`llvm.FuncGen.ptraddScaled` makes this common pattern more convenient.
A particularly nice side effect from this is that after removing some
dead code (left over from before we made all `struct`s etc by-ref), it
has eliminated the need to maintain that nasty mapping between Zig field
indices and LLVM field indices. `FuncGen` no longer cares at all how
aggregate types are lowered!
Slices are still by-val at least for now, but they never lived in that
mapping because their structure is simple and consistent (they always
have a pointer at field index 0 and a usize at field index 1, with no
explicit padding necessary).
I've realised that the cause of at least some of our weird CI flakiness
was a bug in how `Nav` values were resolved. Consider this scenario: the
frontend resolves the type of a `Nav`, and then sends a function to the
backend, which requires the backend to lower a pointer to that `Nav`.
The backend calls `InternPool.getNav` to determine the `Nav`'s type.
However, this races with the frontend resolving the *value* of that
`Nav`. This involves writing separately to two fields, `bits` and
`type_or_value`. If only one of these changes is observed, then the
backend will incorrectly interpret the type as the value or vice versa,
leading to a crash or even a miscompilation. (Of course, there's also
the straightforward issue that the racing loads were non-atomic, making
them illegal).
The only good solution to this was to make `Nav` 4 bytes bigger, giving
it separate `type` and `value` fields. In theory that's quite a small
change, but it ended up having a bunch of nice consequences which led to
this diff being a bit bulkier than expected:
* `Nav.Repr.Bits` was simplified, because it no longer has to track
"resolution status": we can use `.none` for that. This frees up some
bits to make things more consistent between the "type resolved" and
"fully resolved" states.
* This consistency allowed the `Nav.status` union to be replaced with a
simpler field `Nav.resolved`, which is a bit nicer to work with.
* Most of the "getter" functions could be removed from `Nav`
because the state they were fetching had been moved to simple fields
on `Nav.resolved`.
* There were still a handful of free bits in `Nav.Repr.Bits`, which
could be used to represent the "const" and "threadlocal" flags rather
than these being stored on `Key.Extern` and `Key.Variable`. This is a
bit more convenient for linkers.
* With those bits gone, `Key.Variable` is a trivial wrapper around a
type and an initial value, and the fact that a declaration is mutable
can be represented solely through the "const" flag. Therefore,
`Key.Variable` no longer served a purpose, and could be eliminated
entirely in favour of storing the variable's initial value directly in
the "value" field of the `Nav`.
So, I'm quite pleased with this refactor! But anyway, regarding the bug
fix which actually motivated this: if I've done my job correctly, this
should solve some crashes, such as these (which were what tipped me off
to this bug in the first place):
https://codeberg.org/ziglang/zig/actions/runs/2306/jobs/7/attempt/1
https://codeberg.org/ziglang/zig/actions/runs/2173/jobs/6/attempt/1
...and, who knows, perhaps even the random SIGSEGVs we've seen on some
targets! Probably not, but one can hope.
This is a follow-up to PR #30053 / commit 484cc15366.
The code previously did not handle `c_longdouble`, whose size depends on
the target. A `floatSignificandBits` helper function and a smoke test
were added.
Also added the missing max int value to the exhaustive `f16` test cases.
The goal of these changes is to allow the C backend to support the new
lazier type resolution system implemented by the frontend. This required
a full rewrite of the `CType` abstraction, and major changes to the C
backend "linker".
The `DebugConstPool` abstraction introduced in a previous commit turns
out to be useful for the C backend to codegen types. Because this use
case is not debug information but rather general linking (albeit when
targeting an unusual object format), I have renamed the abstraction to
`ConstPool`. With it, the C linker is told when a type's layout becomes
known, and can at that point generate the corresponding C definitions,
rather than deferring this work until `flush`.
The work done in `flush` is now more-or-less *solely* focused on
collecting all of the buffers into a big array for a vectored write.
This does unfortunately involve a non-trivial graph traversal to emit
type definitions in an appropriate order, but it's still quite fast in
practice, and it operates on fairly compact dependency data. We don't
generate the actual type *definitions* in `flush`; that happens during
compilation using `ConstPool` as discussed above. (We do generate the
typedefs for underaligned types in `flush`, but that's a trivial amount
of work in most cases.)
`CType` is now an ephemeral type: it is created only when we render a
type (the logic for which has been pushed into just 2 or 3 functions in
`codegen.c` -- most of the backend now operates on unmolested Zig `Type`s
instead). C types are no longer stored in a "pool", although the type
"dependencies" of generated C code (that is, the struct, unions, and
typedefs which the generated code references) are tracked (in some
simple hash sets) and given to the linker so it can codegen the types.
Because of packed structs, checking whether a type is extern-compatible
requires that its layout be resolved. Performing this validation as
soon as a function type is created would lead to dependency loops in
cases like `*const fn (*@This()) void callconv(.c)`.
Therefore, when creating a function *type*, we no longer perform this
check immediately, instead waiting until the function is called.
Now that https://github.com/ziglang/zig/issues/24657 has been
implemented, the compiler can simplify its internal representation of
comptime-known `packed struct` and `packed union` values. Instead of
storing them field-wise, we can simply store their backing integer
value. This simplifies many operations and improves efficiency in some
cases.
...and rework some of the incremental reference tracking. Almost all
kinds of AnalUnit have one property in common: they might never be
referenced in any update despite conceptually "existing", in which case
we don't want to waste time semantically analyzing them. As of the lazy
type resolution introduced in this commit, the only units to which this
does not apply are `memoized_state` and `@"comptime"`. Previously, I had
a somewhat hacky system in `Zcu` for dealing with this, but I now have a
better understanding of the design incremental compilation is converging
on, so can implement a better solution. By finding a few unused bits
lying around (...or making them), we can represent a single bit of state
indicating whether something's corresponding units have ever been
referenced. This is akin to the units being in `Zcu.outdated`, with the
key difference being that the compiler will *not* attempt to analyze
units which are in this state. Once they are first referenced or
depended on, the flag is set to true and the unit is added to `outdated`
so that it can participate in the normal dependency resolution logic.
It is always a bug in Sema to check whether an inferred error set (IES)
is resolved. This is
because whether the IES is resolved depends on whether the function
which owns it has been analyzed yet, which depends on the order the
compiler analyzes declarations in, which it is incorrect to have any
dependency on. Instead, we must always either not look at the resolved
set, or resolve it first (with `Sema.ensureFuncIesResolved`) and then
look at the definitely-resolved concrete error set.
Luckily, removing a bunch of the buggy logic which tried to
opportunistically use already-resolved inferred error sets actually
didn't regress anything! It seems this logic was mostly left over from
before Andrew reworked inferred error sets, and had become essentially
dead code. This is because inferred error sets are stricter than they
used to be, and in particular, we make no attempt to support mutual
recursion.
I suspect that most of the logic touching IESes can be simplified even
further than I have done here without regressing any existing code; my
goal in this commit was just to remove any *buggy* code I could find.
Eliminate the `std.Thread.Pool` used in the compiler for concurrency and
asynchrony, in favour of the new `std.Io.async` and `std.Io.concurrent`
primitives.
This removes the last usage of `std.Thread.Pool` in the Zig repository.
The new builtins are:
* `@EnumLiteral`
* `@Int`
* `@Fn`
* `@Pointer`
* `@Tuple`
* `@Enum`
* `@Union`
* `@Struct`
Their usage is documented in the language reference.
There is no `@Array` because arrays can be created like this:
`if (sentinel) |s| [n:s]T else [n]T`
There is also no `@Float`. Instead, `std.meta.Float` can serve this use
case if necessary.
There is no `@ErrorSet`, and intentionally no way to reify error sets.
Likewise, there is intentionally no way to reify tuples with comptime
fields, or function types with comptime parameters. These decisions
simplify the Zig language specification, and moreover make Zig code more
readable by discouraging overly complex metaprogramming.
Co-authored-by: Ali Cheraghi <alichraghi@proton.me>
Resolves: #10710
The changes to `codegen.c` are blatant hacks, but the problem they work
around isn't a regression: it's an existing miscompilation. This branch
happened to *expose* that miscompilation in more cases by changing how
an incorrect result is *used*.
I started this diff trying to remove a little dead code from the C
backend, but ended up finding a bunch of dead code sprinkled all over
the place:
* `packed` handling in the C backend which was made dead by `Legalize`
* Representation of pointers to runtime-known vector indices
* Handling for the `vector_store_elem` AIR instruction (now removed)
* Old tuple handling from when they used the InternPool repr of structs
* Straightforward unused functions
* TODOs in the LLVM backend for features which Zig just does not support
The "completed" count in the "Semantic Analysis" progress node had
regressed since 0.14.0: the number got crazy big very fast, even on
simple cases. For instance, an empty `pub fn main` got to ~59,000,
whereas on 0.14 it only reached ~4,000. This was happening because I was
unintentionally introducing a node every time type resolution was
*requested*, even if (as is usually the case) it turned out to already
be done. The fix is simply to start the progress node a little later,
once we know we are actually doing semantic analysis. This brings the
number for that empty test case down to ~5,000, which makes perfect
sense. It won't exactly match 0.14, because the standard library has
changed, and also because the compiler's progress output does have some
*intentional* changes.
This commit replaces the "fuzzer" UI, previously accessed with the
`--fuzz` and `--port` flags, with a more interesting web UI which allows
more interactions with the Zig build system. Most notably, it allows
accessing the data emitted by a new "time report" system, which allows
users to see which parts of Zig programs take the longest to compile.
The option to expose the web UI is `--webui`. By default, it will listen
on `[::1]` on a random port, but any IPv6 or IPv4 address can be
specified with e.g. `--webui=[::1]:8000` or `--webui=127.0.0.1:8000`.
The options `--fuzz` and `--time-report` both imply `--webui` if not
given. Currently, `--webui` is incompatible with `--watch`; specifying
both will cause `zig build` to exit with a fatal error.
When the web UI is enabled, the build runner spawns the web server as
soon as the configure phase completes. The frontend code consists of one
HTML file, one JavaScript file, two CSS files, and a few Zig source
files which are built into a WASM blob on-demand -- this is all very
similar to the old fuzzer UI. Also inherited from the fuzzer UI is that
the build system communicates with web clients over a WebSocket
connection.
When the build finishes, if `--webui` was passed (i.e. if the web server
is running), the build runner does not terminate; it continues running
to serve web requests, allowing interactive control of the build system.
The web interface shows an overall "status" indicating whether a build
is currently running, and also a list of all steps in this build. There
are visual indicators (colors and spinners) for in-progress, succeeded,
and failed steps. There is a "Rebuild" button which will cause the build
system to reset the state of every step (note that this does not affect
caching) and evaluate the step graph again.
If `--time-report` is passed to `zig build`, a new section of the
interface becomes visible, which associates every build step with a
"time report". For most steps, this is just a simple "time taken" value.
However, for `Compile` steps, the compiler communicates with the build
system to provide it with much more interesting information: time taken
for various pipeline phases, with a per-declaration and per-file
breakdown, sorted by slowest declarations/files first. This feature is
still in its early stages: the data can be a little tricky to
understand, and there is no way to, for instance, sort by different
properties, or filter to certain files. However, it has already given us
some interesting statistics, and can be useful for spotting, for
instance, particularly complex and slow compile-time logic.
Additionally, if a compilation uses LLVM, its time report includes the
"LLVM pass timing" information, which was previously accessible with the
(now removed) `-ftime-report` compiler flag.
To make time reports more useful, the Zig compiler ignores its ZIR and
compilation caches when `--time-report` is enabled -- in other words,
`Compile` steps *always* run, even if their result should be cached. This means
that the flag can be used to analyze a project's compile time without
having to repeatedly clear the cache directory, for instance. However, when
using `-fincremental`, updates other than the first will only show you
the statistics for what changed on that particular update. Notably, this
gives us a fairly nice way to see exactly which declarations were
re-analyzed by an incremental update.
If `--fuzz` is passed to `zig build`, another section of the web
interface becomes visible, this time exposing the fuzzer. This is quite
similar to the fuzzer UI this commit replaces, with only a few cosmetic
tweaks. The interface is closer than before to supporting multiple fuzz
steps at a time (in line with the overall strategy for this build UI,
the goal will be for all of the fuzz steps to be accessible in the same
interface), but still doesn't actually support it. The fuzzer UI looks
quite different under the hood: as a result, various bugs are fixed,
although other bugs remain. For instance, viewing the source code of any
file other than the root of the main module is completely broken (as on
master) due to some bogus file-to-module assignment logic in the fuzzer
UI.
Implementation notes:
* The `lib/build-web/` directory holds the client side of the web UI.
* The general server logic is in `std.Build.WebServer`.
* Fuzzing-specific logic is in `std.Build.Fuzz`.
* `std.Build.abi` is the new home of `std.Build.Fuzz.abi`, since it now
relates to the build system web UI in general.
* The build runner now has an **actual** general-purpose allocator,
because thanks to `--watch` and `--webui`, the process can be
arbitrarily long-lived. The gpa is `std.heap.DebugAllocator`, but the
arena remains backed by `std.heap.page_allocator` for efficiency. I
fixed several crashes caused by conflation of `gpa` and `arena` in the
build runner and `std.Build`, but there may still be some I have
missed.
* The I/O logic in `std.Build.WebServer` is pretty gnarly; there are a
*lot* of threads involved. I anticipate this situation improving
significantly once the `std.Io` interface (with concurrency support)
is introduced.
Added an adapter to `AnyWriter` and `GenericWriter` to help bridge the
gap between the old and new APIs.
Made `std.testing.expectFmt` work at compile time.
std.fmt no longer has a dependency on std.unicode. Formatted printing
was never properly Unicode-aware. Now it no longer pretends to be.
Breakage/deprecations:
* std.fs.File.reader -> std.fs.File.deprecatedReader
* std.fs.File.writer -> std.fs.File.deprecatedWriter
* std.io.GenericReader -> std.io.Reader
* std.io.GenericWriter -> std.io.Writer
* std.io.AnyReader -> std.io.Reader
* std.io.AnyWriter -> std.io.Writer
* std.fmt.format -> std.fmt.deprecatedFormat
* std.fmt.fmtSliceEscapeLower -> std.ascii.hexEscape
* std.fmt.fmtSliceEscapeUpper -> std.ascii.hexEscape
* std.fmt.fmtSliceHexLower -> {x}
* std.fmt.fmtSliceHexUpper -> {X}
* std.fmt.fmtIntSizeDec -> {B}
* std.fmt.fmtIntSizeBin -> {Bi}
* std.fmt.fmtDuration -> {D}
* std.fmt.fmtDurationSigned -> {D}
* {} -> {f} when there is a format method
* format method signature
- anytype -> *std.io.Writer
- inferred error set -> error{WriteFailed}
- options -> (deleted)
* std.fmt.Formatter
- now takes context type explicitly
- no fmt string