We really are in the trenches. How is this garbage #1 on the front page of *HN* right now?
Even if it were totally legitimate, the "landing page" design and the headline ("Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software."?????) should discredit it immediately.
Plenty. I assumed that the code examples had been cleaned up manually, so instead I looked at a few random "Caveats, alternatives, edge cases" sections. These contain errors typically made by LLMs, such as suggesting features that don't exist (std.mem.terminated), are non-public (argvToScriptCommandLineWindows), or were removed (std.BoundedArray). These sections also surface irrelevant stdlib and compiler implementation details.
This looks like more data towards the "LLMs were involved" side of the argument, but as my other comment pointed out, that might not be an issue.
We're used to errata and fixing up stuff produced by humans, so if we can fix this resource, it might actually be valuable and more useful than anything that existed before it. Maybe.
One of my things with AI is that if we assume it is there to replace humans, we are always going to find it disappointing. If we use it as a tool to augment, we might find it very useful.
A colleague used to describe it (long before GenAI, when we were talking about technology automation more generally) as follows: "we're not trying to build a super intelligent killer robot to replace Deidre in accounts. Deidre knows things. We just want to give her better tools".
So, it seems like this needs some editing, but it still has value if we want it to have value. I'd rather this was fixed than thrown away (I'm biased, I want to learn systems programming in zig and want a good resource to do so), and yes the author should have been more upfront about it, and asked for reviewers, but we have it now. What to do?
There's a difference between the author being more upfront about it and straight-up lying in multiple locations that zero AI was involved. It's stated on the landing page, in the documentation, and on GitHub - and there might be more locations I haven't seen.
Personally, I would want no involvement in a project whose maintainer is this manipulative, and I would find it a tragedy if anyone contributed to their project.
> and yes the author should have been more upfront about it
They should not have lied about it. That's not someone I would want to trust and support. There's probably a good reason why they decided to stay anonymous.
I literally just came across this resource a couple of days ago and was going to go through it this week as a way to get up to speed on Zig. Glad this popped up on HN so I can avoid the AI hallucinations steering me off track.
I looked into that project issue you're referencing. There is absolutely zero mention of Zig labeled blocks in that exchange. There is no misunderstanding or confusion whatsoever.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
The author could of course be lying. But why would you use AI and then very explicitly call out that you’re not using AI?
There are too many things off about the origin and author not to be suspicious of it. I'm not sure what the motivation was, but AI involvement seems likely. I do think they used the Zig source code heavily, and put together a pipeline of some sort feeding relevant context into the LLM, or maybe just Codex or whatever, instructed to read in the source.
It seems like it had to take quite a bit of effort to make, and is interesting on its own. And I would trust it more if I knew how it was made (LLMs or not).
Because AI content is at minimum controversial nowadays. And if you are OK with lying about authorship, then it's not much further down the pole to embellish the lie a bit more.
I'd love it if we can stop the "Oh, this might be AI, so it's probably crap" thing that has taken over HN recently.
1. There is no evidence this is AI generated. The author claims it wasn't, and on the specific issue you cite, he explains why he's struggling with understanding it, even if the answer is "obvious" to most people here.
2. Even if it were AI generated, that does not automatically make it worthless. In fact, this looks pretty decent as a resource. Producing learning material is one of the few areas we can likely be confident that AI can add value, if the tools are used carefully - it's a lot better at that than producing working software, because synthesising knowledge seen elsewhere and moving it into a new relatable paradigm (which is what LLMs do, and excel at), is the job of teaching.
3. If it's maintained or not is neither here nor there - can it provide value to somebody right now, today? If yes, it's worth sharing today. It might not be in 6 months.
4. If there are hallucinations, we'll figure them out, prove the claim that it is AI generated one way or another, and decide the overall value. If there is one hallucination per paragraph, it's a problem. If it's one every 5 chapters, it might be, but probably isn't. If it's one in 62 chapters, it's beating the error rate of human writers by quite some way.
Yes, the GitHub history looks "off", but maybe they didn't want to develop in public and just wanted to get a clean v1.0 out there. Maybe it was all AI generated and they're hiding. I'm not sure it matters, to be honest.
But I do find it grating that every time somebody even suspects an LLM was involved, there is a rush of upvotes for "calling it out". This isn't rational thinking. It's not using data to make decisions, it's not logical to assume all LLM-assisted writing is slop (even if some of it is), and it's actually not helpful in this case to somebody who is keen to learn zig and wants to decide if this resource is useful or not: there are many programming tutorials written by human experts that are utterly useless; this might be a lot better.
That didn't happen.
And if it did, it wasn't that bad.
And if it was, that's not a big deal.
And if it is, that's not my fault.
And if it was, I didn't mean it.
And if I did, you deserved it.
> 1. There is no evidence this is AI generated. The author claims it wasn't, and on the specific issue you cite, he explains why he's struggling with understanding it, even if the answer is "obvious" to most people here.
There is, actually.
You may copy the introduction to Pangram and it will say 100% AI generated.
> 2. Even if it were AI generated, that does not automatically make it worthless.
It does make it automatically worthless if the author claims it's hand made.
How am I supposed to trust this author if they just lie about things upfront? What worth does learning material have if it's written by a liar? How can I be sure the author isn't just lying with lots of information throughout the book?
The biggest red flag for me is the author hiding their name. If you wrote a quality book about a programming language, you would not hide your identity from the world.
> Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.
I'm not sure what they expect, but to me Zig looks very much like C with a modern standard library and slightly different syntax. That isn't groundbreaking, nor a thought paradigm that should be novel to most systems engineers, in the way that, for example, OCaml could be. Stuff like this alienates people who want a technical justification for the use of a language.
There is nothing new under the Sun. However, some languages manifest as good rewrites of older languages. Rust is that for C++. Zig is that for C.
Rust is the small, beautiful language hiding inside of Modern C++. Ownership isn't new. It's the core tenet of RAII. Rust just pulls it out of the backwards-compatible kitchen sink and builds it into the type system. Rust is worth learning just so that you can fully experience that lens of software development.
Zig is Modern C development encapsulated in a new language. Most importantly, it dodges Rust and C++'s biggest mistake: not passing allocators into containers and functions. Because of that mistake, all realtime development has to rewrite its entire standard library, as with the EASTL.
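For a feel of what explicit allocator passing looks like, a minimal sketch of my own (not from the book; it uses the managed ArrayList API from around Zig 0.14, while newer releases move to an unmanaged variant where the allocator is passed to each call):

```zig
const std = @import("std");

pub fn main() !void {
    // The allocator is an explicit value you pass around, not hidden global state.
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();
    const allocator = arena.allocator();

    // The container receives the allocator up front and uses it for all growth.
    var list = std.ArrayList(u8).init(allocator);
    defer list.deinit();
    try list.appendSlice("allocators are just parameters");
    std.debug.print("{s}\n", .{list.items});
}
```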
On top of the great standard library design, you get comptime, native build scripts, (err)defer, error sets, builtin simd, and tons of other small but important ideas. It's just a really good language that knows exactly what it is and who its audience is.
I think that describing Zig as a "rewrite of C" (good or otherwise) is as helpful as describing Python as a rewrite of Fortran. Zig does share some things with C - the language is simple and values explicitness - but at its core is one of the most sophisticated (and novel) programming primitives we've ever seen: A general and flexible partial evaluation engine with access to reflection. That makes the similarities to C rather superficial. After all, Zig is as expressive as C++.
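To make the partial-evaluation point concrete, a minimal sketch of my own (not from the book): ordinary Zig code is evaluated at compile time to build a lookup table.

```zig
const std = @import("std");

// An ordinary function; nothing about it is syntactically special.
fn squares(comptime n: usize) [n]u32 {
    var table: [n]u32 = undefined;
    for (&table, 0..) |*entry, i| entry.* = @intCast(i * i);
    return table;
}

// Evaluated entirely at compile time: the table is baked into the binary.
const table = squares(8);

pub fn main() void {
    std.debug.print("{d}\n", .{table[5]}); // prints 25
}
```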
> Most importantly, it dodges Rust and C++'s biggest mistake, not passing allocators into containers and functions
I think that is just a symptom of a broader mistake made by C++ and shared by Rust, which is a belief (that was, perhaps, reasonable in the eighties) that we could and should have a language that's good for both low-level and high-level programming, and that resulted in compromises that disappoint both goals.
I don't know man, Rust's borrowing semantics are pretty new under the sun, and actually do change the way you think about software. It's a pretty momentous paradigm shift.
To call Rust syntax beautiful is a stretch. It seems that way in the beginning but then quickly devolves into a monstrosity when you start doing more complex things.
Zig, on the other hand, specifically addresses the syntax shortcomings of C. And it does it well. The claim that Rust is safer than C because it's more readable applies to Zig more than it does to Rust.
I feel like the reason the Rust zealots lobby like crazy to embed Rust everywhere is twofold. One is that they genuinely believe in it; the other is that they know that if another language that addresses one of the main Rust claims, without all the cruft, gains popularity, they lose the chance of being permanently embedded in places like the kernel. Because once they're in, it's a decade-long job market.
> they know that if other languages that address one of the main rust claims without all the cruft gains popularity they lose the chance of being permanently embdedded in places like the kernel
First of all, I'm really opposed to saying "the kernel". I am sure you're talking about the Linux kernel, but there are other kernels (BSD, Windows etc.) that are certainly big enough to not call it "the" kernel, and that may also have their own completely separate "rust-stories".
Secondly, I think the logic behind this makes no sense, primarily because Rust at this point is 10 years old from stable and almost 20 years old from initial release; the adoption into the Linux kernel wasn't exactly rushed. Even if it was, why would Rust adoption in the Linux kernel exclude adoption of another language as well, or a switch to another, if it's better? The fact that Rust was accepted at all alongside C disproves the assumption, because clearly that kernel is open to "better" languages.
The _simplest_ explanation for why Rust has succeeded is that it solves actual problems, not that "zealots" are lobbying for it to ensure they "have a job".
Rust is not stable even today! There is no spec, no alternative implementations, no test suite... "Stable" means "whatever the current compiler compiles"! Existing code may stop compiling any day...
Maybe in 10 years it will become stable, like other "boring" languages (Golang and Java).
Rust stability is why Linus opposes its integration into kernel.
In the "other good news department", GCC is adding a Rust frontend to provide the alternative implementation, and I believe Rust guys accepted to write a specification for the language.
I'm waiting for gccrs before I start using the language, actually.
I'm no Rust fan, but beauty of a syntax is always in the eye of the beholder.
I personally find Go, C++ and Python's syntax beautiful. All can be written in very explicit or expressive forms. On the other hand, you can hide complexity to a point.
If you are going to do complex things in a compact space, you'll asymptotically approach Perl or PCRE. It's maths.
For my part, Zig's syntax feels wrong to me, and I don't even know why. I really want to like its syntax, as Zig seems really promising to me, but I just don't, which makes it not very enjoyable for me to write.
I don't know if it's my lack of practice, but I never felt the same about, say, Rust's syntax, or the syntax of any other language for that matter.
> if other languages that address one of the main rust claims without all the cruft
But regardless of how much one likes Zig, it addresses none of the problems that Rust seeks to solve. It's not a replacement for Rust at all, and isn't suitable for any of the domains where Rust excels.
> and isn't suitable for any of the domains where Rust excels.
That's a pretty bold claim since Zig is specifically designed for systems programming, low level stuff, network services, databases (think Tigerbeetle). It's not memory safe like Rust is, but it comes with constructs that make it simple to build largely memory safe programs.
> It's not memory safe like Rust is, but it comes with constructs that make it simple to build largely memory safe programs.
Right, this is the specific important thing that Rust does that Zig doesn't (with the caveat that Rust includes the `unsafe` mechanism - as a marked, non-default option - specifically to allow for necessary low-level memory manipulation that can't be checked for correctness by the compiler). Being able to guarantee that something can't happen is more valuable than making it simple to do something correctly most of the time.
Sure but there's this belief in the Rust community that it's not responsible anymore to write software that isn't memory safe on the same level as Rust.
So Zig would fail that, but then you could also consider C++ unsuitable for production software - and we know it clearly is still suitable.
I predict Zig will just become more and more popular (and with better, though not as complete, memory safety), and be applied to mission-critical infra.
That is, if we ignore recent moves by governmental cybersecurity agencies and big tech away from unsafe programming languages, as much as technically possible.
Introducing a language with the same safety as Modula-2 or Object Pascal would have made sense in the 1990s; nowadays, with improved type systems making the transition from academia into the mainstream, we (the industry) know better.
It is not only Rust, it is Linear Haskell, OCaml effects, Swift 6 ownership model, Ada/SPARK, Chapel,....
Of those listed, I'd bet Swift (having had experience with it) is the most pleasant to work with. I just hope it takes off on the systems and backend side at some point.
It's not that simple, though: Zig has equivalent spatial memory safety, which prevents issues that are pretty consistently among (or at the top of) the most dangerous vulnerability classes.
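For anyone unfamiliar, a tiny sketch of my own (not from the book) of what spatial safety buys you; in safe build modes (Debug/ReleaseSafe) this panics with an index-out-of-bounds error instead of silently reading past the buffer:

```zig
const std = @import("std");

pub fn main() void {
    const buf = [_]u8{ 1, 2, 3 };
    var i: usize = 3;
    _ = &i; // keep the index runtime-known so the check happens at runtime

    // In Debug/ReleaseSafe this access is bounds-checked: it panics rather
    // than reading out of bounds, the "spatial" half of memory safety.
    std.debug.print("{d}\n", .{buf[i]});
}
```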
And while I don't have enough experience with Rust to claim this first hand, my understanding is that writing correct unsafe Rust code is at least an order of magnitude harder than writing correct Zig code due to all of the properties/invariants that you have to preserve. So it comes with serious drawbacks, it's not just a quick "opt out of the safety for a bit" switch.
> Being able to guarantee that something can't happen is more valuable than making it simple to do something correctly most of the time.
Of course, all other things being equal, but they're not.
> And while I don't have enough experience with Rust to claim this first hand, my understanding is that writing correct unsafe Rust code is at least an order of magnitude harder than writing correct Zig code due to all of the properties/invariants that you have to preserve.
How do you make such boldly dismissive assertions if you don't have enough experience with Rust? You are talking as if these invariants are some sort of requirements/constraints that the language imposes on the programmer. They're not. It's a well-known guideline/paradigm meant to contain any memory safety bugs within the unsafe blocks. Most of the invariants are specific to the problem at hand, not to the programming language. They are conditions that must be met in any language - C and Zig are no exceptions. Failure to adhere to them will land you in trouble, no matter what sort of safety your language guarantees. They are often talked about in the context of Rust because the ones related to memory-unsafe operations can be tackled and managed within the small unsafe blocks, instead of sprawling throughout the code base.
> So it comes with serious drawbacks, it's not just a quick "opt out of the safety for a bit" switch.
Rust is not the ultimate solution to every problem in the world. But this sort of exaggeration and hyperbole is misleading and doesn't help anyone choose any better.
I would compare the recent Rust Android post [1], where they have a 5000x lower memory vulnerability rate compared to traditional C/C++ with the number of segfaults found in Bun. [2]
In my opinion Zig does not move the needle on real safety when the codebase becomes sufficiently complex.
Given the density of memory issues in the Bun issue tracker I have a hard time squaring the statement that Zig makes it "easy" to build memory safe programs.
Rust is not designed for low level system programming / embedded systems like Zig is. It is designed to make a browser, and software that shares requirements with making a browser.
There is some overlap, but that's still different. The Zig approach to memory safety is to make everything explicit; it is good in a constrained environment typical of embedded programming. The Rust approach is the opposite: you don't really see what is happening, but there are mechanisms to keep you safe. It is good for complex software with lots of moving parts in an unconstrained environment, like a browser.
For a footgun analogy, one will hand you a gun that will never go off unless you aim and pull the trigger, so you can shoot your foot, but no sane person will. It is a good sniper rifle. The Rust gun can go off at any time, even when you don't expect it, but it is designed in such a way that it will never happen when it is pointed at your foot, even if you aim it there. It is a good machine gun.
Great C interop, first class support for cross-compilation, well suited for arena allocators.
You can use Rust in kernel/embedded code; you can also use C++ (I did) and even Java! But most prefer to use C, and I think that Zig is a better alternative to C for those in the field.
There is still one huge drawback with Zig and that's maturity. Zig is still in beta, and the closer you get to the metal, the more it tends to matter. Hardware projects typically have way longer life cycles and the general philosophy is "if it ain't broke, don't fix it". Rust is not as mature as C by far, there is a reason C is still king, but at least it is out of beta and is seeing significant production use.
I remember when I talked about Zig to the CTO of the embedded branch of my company. His reaction was telling: "I am happy to hear someone mention Zig, it is a very interesting language and it is definitely on my watch list, but not mature enough to invest in it". He was happy that I mentioned Zig because in the company the higher-ups are all about Rust because of the hype, even though we do very little of it, BTW; it is still mostly C and C++. And yeah, hype is important: customers have heard of Rust as some magical tech that will make the code bug-free, they haven't heard of Zig, so Rust sells better. In the end, they go for C anyway.
That "kind of" is a bit load-bearing. The differences are pretty huge. Plus, the borrow checker is nowhere to be found. Cyclone is more C with a few tweaks (tagged unions, generics, regions, etc.).
Borrow checking is basically a synonym for affine type system.
The same outcome can be achieved via affine types, linear types, effects, dependent types, regions, proofs, among many other CS research in type systems.
Which is why following Rust's success, plenty of managed languages are now going through the evolution step to combine automatic resource management with improved type systems.
Each takes the approach that best fits its current design.
There were languages with lifetimes and borrowing mechanics before Rust. Rust packages these mechanics in a nice way. Just like Zig encodes many niceties in a useful C language (comptime, simple cross-compilation, stdlib).
Which ones?? Before Rust, to my knowledge, no language had an actually practical way to use lifetimes and borrow-checking so that both memory safety and concurrency safety (data races, which is huge) were solved, even though the concepts were known in research. Doing the actual work to make it practical is what makes the difference between some obscure research topic and a widely used language that actually solves serious problems in the real world.
Yeah but is that a practical language people can use instead of C and Rust? I’ve always heard of it only as a research language that inspired rust but nothing else.
Outside AT&T until they ramped down the project, I guess not, Rust also took its time to actually take off beyond Mozilla, and is around because it was rescued by big tech (Amazon, Google, Microsoft,...) hiring most of the core team after Mozilla's layoffs.
> actually do change the way you think about software. It's a pretty momentous paradigm shift.
That's true for people who don't read and think about the code they write. For people who think from the perspective of a computer, Rust is "the same checks, but enforced by the compiler".
Make no mistake, to err is human, but Rust doesn't excite me that much.
> Most importantly, it dodges Rust and C++'s biggest mistake, not passing allocators into containers and functions
Funny. This was a great sell to me. I wonder why it isn’t the blurb. Maybe it isn’t a great sell to others.
The problem for me with so many of these languages is that they’re always eager to teach you how to write a loop when I couldn’t care less and would rather see the juice.
However, nowadays with comprehensive books like this, LLM tools can better produce good results for me as I try it out.
Very, very few people outside of foundational system software, HFT shops, and game studios understand why it's a great selling point. Everyone else likes the other points and doesn't realize the actual selling point of the language.
Graydon Hoare, a former C++ programmer on Mozilla Firefox and the original creator of Rust, acknowledges that for many people, Rust has become a viable alternative to C++:
It's possible that Graydon's earliest private versions of Rust in the 4 years prior to that pdf were an OCaml-inspired language, but it's clear that once the team of C++ programmers at Mozilla started adding their influences, they wanted it to be a cleaner version of C++. That's also how the rest of the industry views it.
Alternative yes, derivative no. Rust doesn't approach C++'s metaprogramming features, and it probably shouldn't given how it seems to be used. It's slightly self-serving for browser devs to claim Rust solves all relevant problems in their domain and therefore eclipses C++, but to me in the scientific and financial space it's a better C, making tradeoffs I don't see as particularly relevant.
I say this as a past contributor to the Rust std lib.
> Rust and C++'s biggest mistake, not passing allocators into containers and functions
Rather, basing its entire personality around this philosophy is Zig's biggest mistake. If you want to pass around allocators in C++ or Rust, you can just go ahead and do that. But the reason people don't isn't because it's impossible in those languages, it's because the overwhelming majority of the time it's a lot of ceremony for no benefit.
Like, surely people see that in C itself there's nothing stopping anyone from passing around allocators, and yet almost nobody ever does. Ever wonder why that is?
Much of the book's copy appears to have been written by AI (despite the foreword statement that none of it was), which explains the hokey overenthusiasm and exaggerations.
As we know AI is at least as smart as the average human. It knows the Zeitgeist and thus adds “No AI used” in order to boost “credibility”. :) (“credibility” since AI is at least as smart the average human, for us in the know.)
For those who actually want to learn languages which are "fundamentally changing how you think about software", I'd recommend the Lisp family and APL family.
No need to include Elixir here; none of the important bits that will change how you view software come from Elixir, it's just a skin on top of Erlang (+ some standard library wrappers) and that's it.
I'd argue more people use Elixir over Erlang at this point. Sure, it's just an abstraction on top of Erlang, but people learn through Elixir nowadays, not through Erlang.
If you want to learn the actual mind changing aspects of the BEAM, clearly learning the simpler, smaller language with a more direct route to the juice is the way to go. Hence Erlang, not Elixir. I learned Elixir first back in 2015, and then learned Erlang, and have had the pleasure of using both in production. When all was said and done I really think Erlang was better, especially over a long enough time frame.
As a general point I'd like to state that I don't think it really matters what "people" do when you're learning for yourself. In the grand scheme of things approximately no one uses the BEAM, but this doesn't mean that learning how to use it is somehow pointless.
What is the most optimal Erlang/Elixir you can think of regarding standardized effect systems for recording non-determinism, replaying and reversible computing? How comparable are performance numbers of Erlang/Elixir with Java and wasm?
I'd recommend asking the Elixir community about this as I didn't even understand your question.
I am by no means a professional with Erlang/Elixir. I threw it out there because these languages force you to think differently compared to common OOP languages.
Not even close. While Numpy has many similar operations, it lacks the terseness, concepts like trains and forks etc. Modern APL style doesn't use... control flow (neither recursion nor loops nor if...) and often avoids variables (tacit/point-free style).
Zig is so novel that it's hard to find any language like it. Its similarity to C is superficial. AFAIK, it is the first language ever to rely on partial evaluation so extensively. Of course, partial evaluation itself is not new at all, but neither were touchscreens when the iPhone came out. The point wasn't that it had a touchscreen, but that it had almost nothing but. The manner and extent of Zig's use of partial evaluation are unprecedented. I have nothing against OCaml, but it is a variant of ML, a 1970s language, that many undergrads were taught at university in the nineties.
I'm not saying everyone should like Zig, but its design is revolutionary:
I guess comptime is a little different but yeah I wouldn't say it fundamentally changes how you think about software.
I wouldn't say that about OCaml either really though. It's not wildly different in the way that e.g. Lean's type system, or Rust's borrow checker or Haskell's purity is.
This looks fantastic. Pedagogically it makes sense to me, and I love this approach of not just teaching a language, but a paradigm (in this case, low-level systems programming), in a single text.
Zig got me excited when I stumbled into it about a year ago, but life got busy and then the io changes came along and I thought about holding off until things settled down - it's still a very young language.
But reading the first couple of chapters has piqued my interest in a language and the people who are working with it in a way I've not run into since I encountered Ruby in ~2006 (before Rails hit v1.0), I just hope the quality stays this high all the way through.
So many comments about the AI generation part. Why does it matter? If it’s good and accurate and helpful why do you care? That’s like saying you used a calculator to calculate your equations so I can’t trust you.
I am just impressed by the quality and details and approach of it all.
Nicely done (PS: I know nothing about systems programming and I have been writing code for 25 years)
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
If the site would have said something like "We use AI to clean up our prose, but it was all audited thoroughly by a human after", I wouldn't have an issue. Even better if they shared their prompts.
Because AI gets things wrong, often, in ways that can be very difficult to catch. By their very nature LLMs write text that sounds plausible enough to bypass manual review (see https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...), so some find it best to avoid using it at all when writing documentation.
But all those "it's AI" posts are about the prose and "style", not the actual content. So even if (and that is a big if) the text was written with the help of AI (and there are many valid reasons to use it, e.g. if you're not a native speaker), that does not mean the content was written by AI and thus contains AI mistakes.
If it was so obviously written by AI then finding those mistakes should be easy?
The style is the easiest thing for people to catch; GP said that the technical issues can be more difficult to find, especially in longer texts, though sometimes they are indeed caught.
Passing even correct information through an LLM may or may not taint it; it may create sentences which on first glance are similar, but may have different, imprecise meaning - specific wording may be crucial in some cases. So if the style is under question, the content is as well. And if you can write the technically correct text at first, why would you put it through another step?
AI tools make different types of mistakes than humans, and that's a problem. We've spent eons creating systems to mitigate and correct human mistakes, which we don't have for the more subtle types of mistakes AI tends to make.
Presumably the "subject matter expert" will review the output of the LLM, just like a reviewer. I think it's disingenuous to assume that just because someone used AI they didn't look at or review the output.
But why would a serious person claim that they wrote this without AI when it's obvious they used it?!
Using any tool is fine, but someone bragging about not having used a tool they actually used should make you suspicious about the amount of care that went to their work.
That’s fine. Write it out yourself and then ask an AI how it could be improved with a diff. Now you’ve given it double human review (once in creation then again reviewing the diff) and single AI review.
That's one review with several steps and some AI assistance. Checking your work twice is not equivalent to having it reviewed by two people; part of reviewing your work (or the work of others) is checking multiple times and taking advantage of whatever tools are at your disposal.
Because the first thing you see when you click the link is "Zero AI" pasted under the most obviously AI-generated copy I've ever seen. It's just an insult to our intelligence, obviously we're gonna call OP out on this. Why lie like that?
It's funny how everyone has gaslit themselves into doubting their own intuitions on the most blatant specimen where it's not just a mere whiff of the reek but an overpowering pungency assaulting the senses at every turn, forcing themselves to exclaim "the Emperor's fart smells wonderful!"
“The Party told you to reject the evidence of your eyes and ears. It was their final, most essential command.”
It matters because it irritates me to no end that I have to review AI-generated content that a human did not verify first. I don't like being made to work under the guise of someone giving me free content.
> That’s like saying you used a calculator to calculate your equations so I can’t trust you.
A calculator exists solely for the realm of mathematics, where you can afford to more or less throw away the value of human input and overall craftsmanship.
That is not the case with something like this, which - while it leans in to engineering - is in effect viewed as a work of art by people who give a shit about the actual craft of writing software.
>That’s like saying you used a calculator to calculate your equations so I can’t trust you.
No it isn't. My TI-83 is deterministic and will give me exactly what I ask for, and will always do so, and when someone uses it they need to understand the math first or otherwise the calculator is useless.
These AI models on the other hand don't care about correctness, by design don't give you deterministic answers, and the person asking the question might as well be a monkey as far as their own understanding of the subject matter goes. These models are if anything an anti-calculator.
As Dijkstra points out in his fantastic essay on the idiocy of natural language "computation", what you are doing is exactly not computation but a kind of medieval incantation. Computers were designed to render impossible precisely the nonsense that LLMs produce. The biggest idiot on earth will still get a correct result from the calculator because unlike the LLM it is based on boolean logic, not verbal or pictorial garbage.
An awful lot of commenters are convinced that it's AI-generated, despite explicit statements to the contrary. Maybe they're wrong, maybe they're right, but none of them currently have any proof stronger than vibes. It's like everyone has gaslit themselves into thinking that humans can't write well-structured neutral-tone docs any more.
This is not written in a neutral-tone at all! There is a lot of bland marketing speech that feels completely out of place. This is not how you write good technical literature.
Many people have already shown the hallucinated APIs, which is much stronger evidence than your "vibes".
I suppose the author may have deliberately added the "No AI assistance" notice - making sure all the hallucinated bugs are found via outraged developers raising tickets. Without that people may not even have bothered.
I value human work and I do NOT value work that has been done with heavy AI usage.
Most AI things I've seen are slop; I instantly recognize AI songs, for example. I just don't want anything to do with it. The uniqueness of creative work is lost with using AI.
I agree, I love zig but the things that make me program differently are features like excellent enum/union support, defer and comptime, which aren't readily available in the other languages I tend to use (C++, Fortran and Python).
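For readers coming from those languages, a small sketch of my own (not from the book) of tagged unions with exhaustive switch, plus defer:

```zig
const std = @import("std");

// A tagged union: the active variant is tracked by an enum tag,
// and switch must handle every case exhaustively.
const Shape = union(enum) {
    circle: f64, // radius
    square: f64, // side length
};

fn area(shape: Shape) f64 {
    return switch (shape) {
        .circle => |r| std.math.pi * r * r,
        .square => |side| side * side,
    };
}

pub fn main() void {
    // defer runs at scope exit, in reverse order of declaration.
    defer std.debug.print("done\n", .{});
    std.debug.print("{d:.2}\n", .{area(.{ .circle = 2.0 })});
}
```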
It's pretty incredible how much ground this covers! However, the ordering feels a little confusing to me.
One example is in chapter 1. It talks about symbol exporting based on platform type, without explaining ELF. This is before talking about while loops.
It's had some interesting nuggets so far, and I've followed along since I'm familiar with some of the broad strokes, but I can see it being confusing to someone new to systems programming.
It looks cool! No experience with Zig so I can't comment on the accuracy, but I will take a look at it this week. It's also a bit annoying that there is no PDF version I could download, as the website is pretty slow. After taking a look at the repository (https://github.com/zigbook/zigbook/tree/main), each page seems to be written in AsciiDoc, so I'll look into compiling a PDF version later today.
HOWTO: The text can be found per-chapter in `./pages/{chapter}.adoc`, but each chapter includes code snippets from a respective `./chapters-data/code/{chapter}/` subdirectory. So, as a perhaps hacky way to do it (I was too lazy to fully figure out the asciidoctor flags), I used a script to create a combined book.adoc that includes all the others via `include::{chapter}.adoc` directives, then ran `asciidoctor-pdf -a sourcedir=../chapters-data/code -r asciidoctor-diagram -o book.pdf ./pages/book.adoc`.
Welp. I wish I had read the comments first to discover that this is AI generated. On the other hand, I got to experience the content without bias.
I opted to give it a try instead of reading the comments, and the book is arranged in a super strange way: it discusses concepts that the majority of programmers would never be concerned with when starting out with learning a language. Learning about some of these concepts makes sense if you are reading a language doc in order to work on the language itself. But if you want to learn how to use the language, something like:
> Choose between std.debug.print, unbuffered writers, and buffered stdout depending on the output channel and performance needs.
is absolutely never going to be something you dump into chapter 1. I skimmed through a few chapters from there and it's blocks of stuff thrown in randomly. The introduction to the if conditional throws in Zig Intermediate Representation with absolutely no explanation of what it is and why it's even being discussed.
Came here to comment that this has been written pretty poorly or just targets a very niche audience and now I discover it's slop. What a waste of time. The one thing AI was supposed to save.
Very well done! wow! Thanks for this. Going through this now.
One comment about the syntax highlighting: the dark blue for keywords against a black background is very difficult to read. And if you opt for the white background, the text becomes off-white/grey, which again is very difficult to read.
"The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices."
Such a bald-faced and utterly disgusting lie. The introduction itself ticks every single flag of AI generated slop. AI is trained well on corporate marketing brochures.
Hmm, the explanation of allocators is much more detailed in the book, but I feel the language reference's version, although more compact, is much more reasonable. [0]
I'll keep exploring this book though, it does look very impressive.
It's really hard to believe this isn't AI generated, but today I was trying to use the HTTP server from std after the 0.15 changes and couldn't figure out how it's supposed to work until I searched repos on GitHub. LLMs couldn't figure it out either; they were stuck in a loop of changing/breaking things even further until they arrived at the solution of using the deprecated way. So I guess this is actually handwritten, which is amazing, because it looks like the best resource for Zig I've seen up until now.
it's not only the size - it was pushed all at once, anonymously, using text that highly resembles that of an AI. I still think that some of the text is AI generated. perhaps not the code, but the wording of the text just reeks of AI
For some of my projects I develop against my own private git server, then when I'm ready to go public, create a new git repo with a fully squashed history. My early commits are basically all `git commit -m "added stuff"`
It's almost as though the LLMs were trained on all the writing conventions which are used by humans and are parroting those, instead of generating novel outputs themselves.
They haven’t picked up any one human writing style, they’ve converged on a weird amalgamation of expressions and styles that taken together don’t resemble any real humans writing and begin to feel quite unnatural.
As someone who uses em-dashes a lot, I’m getting pretty tired of hearing something “screams AI” about extremely simple (and common) human constructs. Yeah, the author does use that convention a number of times. But that makes sense, if that’s a tool in your writing toolbox, you’ll pull it out pretty frequently. It’s not signal by itself, it’s noise. (does that make me an AI!?) We really need to be considering a lot more than that.
Reading through the first article, it appears to be compelling writing and a pretty high quality presentation. That’s all that matters, tbh. People get upset about AI slop because it’s utterly worthless and exceptionally low quality.
The repetitiveness of the shell commands (and using zig build-exe instead of zig run when the samples consist of short snippets), the filler bullet points and section organization that fail to convey any actual conceptual structure.
And ultimately throughout the book the general style of thought processes lacks any of the zig community’s cultural anachronisms.
If you take a look at the repository you'll also notice baffling tech choices, not justified by the author, that run counter to the Zig ethos.
(Edit: the build system chapter is an even worse offender in meaningless cognitively-cluttering headings and flowcharts, it’s almost certainly entirely hallucinated, there is just an absurd degree of unziglikeness everywhere: https://www.zigbook.net/chapters/26__build-system-advanced-t... -- What’s with the completely irrelevant flowchart of building the zig compliler? What even is the point of module-graph.txt? And icing on the cake in the “Vendoring vs Registry Dependencies” section.)
Yeah and then why would they explicitly deny it? Maybe the AI was instructed not to reveal its origin. It's painful to enjoy this book if I know it's likely made by an LLM.
I've had the same experience as you with Zig. I quite love the idea of Zig, but the undocumented churn is a bit much. I wish they had auto-generated docs that reflect the current state of the stdlib, at least. Even if it just listed the signatures with no commentary.
I was trying to solve a simple problem but Google, the official docs, and LLMs were all out of date. I eventually found what I needed in Zig's commit history, where they casually renamed something without updating the docs. It's been renamed once more apparently, still not reflected in the docs :shrugs:.
But you can tell your LLM to just go look at the source code (after checking it out so it doesn’t try 20s github requests). Always works like a charm for me.
C++ is far better than C in very many ways. It's also far worse than C in very many other ways. Given a choice between the two, I'd still choose C++ every day just for RAII. There's only so much that we can blame programmers for memory leaks, use-after-free, buffer overflows, and other things that are still common in new C code. At some point, it is the language itself that is unsuitable and insufficient.
Early talks by Andrew explicitly leaned into the notion that "software can be perfect", which is a deviation from how most programmers view software development.
Zig also encourages you to "think like a computer" (also an explicit goal stated by Andrew) even more than C does on modern machines, given things like real vectors instead of relying on auto vectorization, the lack of a standard global allocator, and the lack of implicit buffering on standard io functions.
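On the vector point, a minimal sketch of my own (not from the book); @Vector gives explicit SIMD semantics instead of hoping the optimizer auto-vectorizes a loop:

```zig
const std = @import("std");

pub fn main() void {
    // Element-wise operations on @Vector map directly to SIMD
    // instructions where the target supports them.
    const a: @Vector(4, f32) = .{ 1.0, 2.0, 3.0, 4.0 };
    const b: @Vector(4, f32) = .{ 10.0, 20.0, 30.0, 40.0 };
    const sum: [4]f32 = a + b; // coerce to an array for printing
    std.debug.print("{any}\n", .{sum});
}
```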
I would definitely put Zig on the list of languages that made me think about programming differently.
I think it mostly comes down to the standard library guiding you down this path explicitly. The C stdlib is quite outdated and is full of bad design that affects both performance and ergonomics. It certainly doesn't guide you down the path of smart design.
Zig _the language_ barely does any of the heavy lifting on this front. The allocator and io stories are both just stdlib interfaces. Really the language just exists to facilitate the great toolchain and stdlib. From my experience the stdlib seems to make all the right choices, and the only time it doesn't is when the API was quickly created to get things working, but hasn't been revisited since.
A great case study of the stdlib being almost perfect is SinglyLinkedList [1]. Many other languages implement it as a container, but Zig has opted to implement it as an intrusively embedded element. This might confuse a beginner who would expect SinglyLinkedList(T) instead, but it has implications surrounding allocation and it turns out that embedding it gives you a more powerful API. And of course all operations are defined with performance in mind. prepend is given to you since it's cheap, but if you want postpend you have to implement it yourself (it's a one liner, but clearly more expensive to the reader).
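To illustrate the intrusive pattern, a minimal sketch of my own (not from the book), assuming the non-generic std.SinglyLinkedList from recent Zig; exact field and method names vary across versions:

```zig
const std = @import("std");

// The node is embedded in your own struct rather than wrapping your data.
const Task = struct {
    id: u32,
    node: std.SinglyLinkedList.Node = .{},
};

pub fn main() void {
    var list: std.SinglyLinkedList = .{};
    var a = Task{ .id = 1 };
    var b = Task{ .id = 2 };

    // prepend is O(1), so the stdlib provides it directly.
    list.prepend(&a.node);
    list.prepend(&b.node);

    // Walking the list recovers the surrounding struct via @fieldParentPtr.
    var it = list.first;
    while (it) |n| : (it = n.next) {
        const task: *Task = @fieldParentPtr("node", n);
        std.debug.print("task {d}\n", .{task.id});
    }
}
```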
Little decisions add up to make the language feel great to use and genuinely impressive for learning new things.
A nitpick about the website: the top progress bar is kind of distracting (high-contrast color with animation). It's also unnecessary because there is already a scrollbar on the right side.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
I just don't buy it. I'm 99% sure this is written by an LLM.
Can the author... Convince me otherwise?
> This journey begins with simplicity—the kind you encounter on the first day. By the end, you will discover a different kind of simplicity: the kind you earn by climbing through complexity and emerging with complete understanding on the other side.
> Welcome to the Zigbook. Your transformation starts now.
...
> You will know where every byte lives in memory, when the compiler executes your code, and what machine instructions your abstractions compile to. No hidden allocations. No mystery overhead. No surprises.
...
> This is not about memorizing syntax. This is about earning mastery.
Pretty clear it's all AI. The @zigbook account only has 1 activity prior to publishing this repo, and that's an issue where they mention "ai has made me too lazy": https://github.com/microsoft/vscode/issues/272725
After reading the first five chapters, I'm leaning this way. Not because of a specific phrase, but because the pacing is way off. It's really strange to start with symbol exporting, then moving to while loops, then moving to slices. It just feels like a strange order. The "how it works" and "key insights" also feel like a GPT summarization. Maybe that's just a writing tic, but the combination of correct grammar with bad pacing isn't something I feel like a human writer has. Either you have neither (due to lack of practice), or both (because when you do a lot of writing you also pick up at least some ability to pace). Could be wrong though.
It's just an odd claim to make when it feels very much like AI generated content + publish the text anonymously. It's obviously possible to write like this without AI, but I can't remember reading something like this that wasn't written by AI.
It doesn't take away from the fact that someone used a bunch of time and effort on this project.
To be clear, I did not dismiss the project or question its value - simply questioned this claim as my experience tells me otherwise and they make a big deal out of it being human written and "No AI" in multiple places.
I was pretty skeptical too, but it looks legit to me. I've been doing Zig off and on for several years, and have read through the things I feel like I have a good understanding of (though I'm not working on the compiler, contributing to the language, etc.) and they are explained correctly in a logical/thoughtful way. I also work with LLMs a ton at work, and you'd have to spoon-feed the model to get outputs this cohesive.
Keep in mind that pangram flags many hand-written things as AI.
> I just ran excerpts from two unpublished science fiction / speculative fiction short stories through it. Both came back as ai with 99.9% confidence. Both stories were written in 2013.
> I've been doing some extensive testing in the last 24 hours and I can confidently say that I believe the 1 in 10,000 rate is bullshit. I've been an author for over a decade and have dozens of books at hand that I can throw at this from years prior to AI even existing in anywhere close to its current capacity. Most of the time, that content is detected as AI-created, even when it's not.
> Pangram is saying EVERYTHING I have hand written for school is AI. I've had to rewrite my paper four times already and it still says 99.9% AI even though I didn't even use AI for the research.
> I've written an overview of a project plan based on a brief and, after reading an article on AI detection, I thought it would be interesting to run it through AI detection sites to see where my writing winds up. All of them, with the exception of Pangram, flagged the writing as 100% written by a human. Pangram has "99% confidence" of it being written by AI.
I generally don't give startups my contact info, but if folks don't mind doing so, I recommend running pangram on some of their polished hand written stuff.
How long were the extracts you gave to Pangram? Pangram only has the stated very high accuracy for long-form text covering at least a handful of paragraphs. When I ran this book, I used an entire chapter.
Doesn't mean that the author might not use AI to optimise legibility. You can write stuff yourself and use an LLM to enhance the reading flow. Especially for non-native speakers it is immensely helpful to do so. Doesn't mean that the content is "AI-generated". The essence is still written by a human.
>If an LLM was used in any fashion, then this statement is simply a lie.
While I don't believe the article was created this way, it's possible to use an LLM purely as a classifier. E.g. prompt along the lines of "Does this paragraph contain any errors? Answer only yes or no." and generate only a single set of token probabilities, without any autoregression. Flag any paragraphs with sufficient probability of "yes" for human review.
Clarity in writing comes mostly from the logical structure of ideas presented. Writing can have grammar/style errors but still be clear. If the structure is bad after translation, then it was bad before translation too.
I'm not sure, but I try my best to assume good faith / be optimistic.
This one hit a sore spot b/c many people are putting time and effort into writing things themselves and to claim "no ai use" if it is untrue is not fair.
If the author had a good explanation... idk, "not a native English writer, used an LLM to translate, and that included the 'no LLMs used' call-out, which got translated improperly", etc.
To me it's another specimen in the "demonstrating personhood" problem that predates LLMs. e.g. Someone replies to you on HN or twitter or wherever, are they a real person worth engaging with? Sometimes it'll literally be a person but their behavior is indistinguishable from a bot, that's their problem. Convincing signs of life include account age, past writing samples, and topic diversity.
You can't just say that a linguistic style "proves" or even "suggests" AI. Remember, AI is just spitting out things it's seen before elsewhere. There are plenty of other texts I've seen with this sort of writing style, written long before AI was around.
Can I also ask: so what if it is or it isn't?
While AI slop is infuriating and the bubble hype is maddening, I'm not sure that calling out content as "must be AI" every time somebody dislikes its style, and then debating whether it is or isn't, is not at least as maddening. It feels like all published content now gets debated like this, and I'm definitely not enjoying it.
You can be skeptical of anything but I think it's silly to say that these "Not just A, but B" constructions don't strongly suggest that it's generated text.
As to why it matters, doesn't it matter when people lie? Aren't you worried about the veracity of the text if it's not only generated but was presented otherwise? That wouldn't erode your trust that the author reviewed the text and corrected any hallucinations even by an iota?
I don't think there was very much abuse of "not just A, but B" before ChatGPT. I think that's more of a product of RLHF than the initial training. Very few people wrote with the incredibly overwrought and flowery style of AI, and the English speaking Internet where most of the (English language) training data was sourced from is largely casual, everyday language. I imagine other language communities on the Internet are similar but I wouldn't know.
Don't we all remember 5 years ago? Did you regularly encounter people who write like every followup question was absolutely brilliant and every document was life changing?
I think about why's (poignant) Guide to Ruby [1], a book explicitly about how learning to program is a beautiful experience. And its language is still pedestrian compared to the language in this book. Because most people find writing like that saccharine, and so don't write that way. Even when they're writing poetically.
Regardless, some people born in England can speak French with a French accent. If someone speaks French to you with a French accent, where are you going to guess they were born?
Even if that were comparable in size to the conversational Internet, how many novels and academic papers have you read that used multiple "not just A, but B" constructions in a single chapter/paper (that were not written by/about AI)?
IMO HN should add a guideline about not insinuating things were written by AI. It degrades the quality of the site similarly to many of the existing rules.
Arguably it would be covered by some of the existing rules, but it's become such a common occurrence that it may need singling out.
What degrades conversation is to lie about something being not AI when it actually is. People pointing out the fraud are right to do so.
One thing I've learned is that comment sections are a vital defense on AI content spreading, because while you might fool some people, it's hard to fool all the people. There have been times I've been fooled by AI only to see in the comments the consensus that it is AI. So now it's my standard practice to check comments to see what others are saying.
If mods put a rule into place that muzzles this community when it comes to alerting others that a fraud is being effected, that just makes this place a target for AI scams.
It's 2025, people are going to use technology and its use will spread.
There are intentional communities devoted to stopping the spread of technology, but HN isn't currently one of them. And I've never seen an HN discussion where curiosity was promoted by accusations or insinuations of LLM use.
It seems consistent to me with the rules against low effort snark, sarcasm, insinuating shilling, and ideological battles. I don't personally have a problem with people waging ideological battles about AI, but it does seem contrary to the spirit of the site for so many technical discussions to be derailed so consistently in ways that specifically try to silence a form of expression.
I'm 100% okay with AI spreading. I use it every day. This isn't a matter of an ideological battle against AI, it's a matter of fraudulent misrepresentation. This wouldn't be a discussion if the author themselves hadn't claimed what they had, so I don't see why the community should be barred from calling that out. Why bother having curious discussions about this book when they are blatantly lying about what is presented here? Here's some curiosity: what else are they lying about, and why are they lying about this?
To clarify there is no evidence of any lying or fraud. So far all we have evidence of is HN commenters assuming bad faith and engaging in linguistic phrenology.
There is evidence, it's circumstantial, but there's never going to be 100% proof. And that's the point, that's why community detection is the best weapon we have against such efforts.
(Nitpick: it's actually direct evidence, not circumstantial evidence. I think you mean it isn't conclusive evidence. Circumstantial evidence is evidence that requires an additional inference, like the accused being placed at the scene of the crime implying they may have been the perpetrator. But stylometry doesn't require any additional inference, it's just not foolproof.)
I wouldn't mind a technical person transparently using AI for the writing, which isn't necessarily their strength, as long as the content itself comes from the author's expertise and the generated writing is thoroughly vetted to make sure there's no hallucinated misunderstanding in the final text. At the end of the day this would just increase the amount of high-quality technical content available, because the set of people with both good writing skill and deep technical expertise is much narrower than just the latter.
But claiming you didn't use AI when you did breaks all trust between you and your readership, and makes the end result pretty much worthless, because why read a book if you don't trust the author not to waste your time?
So petty as to lie about using AI or so petty as to call it out? Calling it out doesn't seem petty to me.
I intend to learn Zig when it reaches 1.0 so I was interested in this book. Now that I see it was probably generated by someone who claimed otherwise, I suspect this book would have as much of a chance of hurting my understanding as helping it. So I'll skip it. Does that really sound petty?
I understand being okay with a book being generated (some of the text I published in this manual [1] is generated), I can imagine not caring that the author lied about their use of AI, but I really don't understand the suggestion I write a book about a subject I just told you I'm clueless about. I feel like there's some kind of epistemic nihilism here that I can't fathom. Or maybe you meant it as a barb and it's not that deep? You tell me I guess.
I'm also concerned about whether it is useful! That's why I'm not gonna read it after receiving a strong contrary indicator (which was less the use of AI than the dishonesty around it). That's also why I try to avoid sounding off on topics I'm not educated in (which is to say, why I'm not writing a book about Zig).
Remember - I am using AI and publishing the results. I just linked you to them!
So you could do everyone a favour by giving a sufficiently detailed review, possibly with recommendations to the author how to improve the book. Definitely more useful than speculating about the author's integrity.
I'm satisfied with what's been presented here already, and as someone who doesn't know Zig it would take me several weeks (since I would have to learn it first), so that seems like an unreasonable imposition on my time. But feel free to provide one yourself.
Well, there must have been a good reason why you don't like the book. I didn't see good reasons in this whole discussion so far, just a lot of pedantry. No commenter points to technical errors, inaccuracies, poor code examples, or pedagogical problems. The entire objection rests on subjective style preferences and aesthetic nitpicking rather than legitimate quality concerns.
I don't see what else I can say to help you understand. I think we just have very different values and world views and find one another's perspective baffling. Perhaps your preferred AI assistant, if directed to this conversation, could put it in clearer terms than I am able to.
My statement refers to this claim: "I'm 99% sure this is written by an LLM."
The hypocrisy and entitlement mentality that prevails in this discussion is disgusting. My recommendation to the fellow below that he should write a book himself (instead of complaining) was even flagged, demonstrating once again the abuse of this feature to suppress other, completely legitimate opinions.
I'm guessing it was flagged because it came off as snark. I've gone ahead and vouched it but of course I can't guarantee it won't get flagged again. To be frank this comment is probably also going to get flagged for the strong language you're using. I don't think either are abusive uses of flagging.
Additionally, please note that I neither complained nor expressed an entitlement. The author owes me as much as I owe them (nothing beyond respect and courtesy). I'm just as entitled to express a criticism as they are to publish a book. I suppose you could characterize my criticism as complaints, but I don't see what purpose that serves other than to turn up the rhetorical temperature.
I don't know any of you. But Zig has opened a big door into systems programming for people like me who have never done it before. And Zig code looks (to a guy coming from curly-brace languages) easy to understand, with a really small learning curve.
> The book content itself is deliberately free of AI-generated prose. Drafts may start anywhere, but final text should be reviewed, edited, and owned by a human contributor.
There is more specificity around AI use in the project README. There may have been LLMs used during drafting, which has led to the "hallmarks" sticking around that some commenters are pointing out.
That statement is honestly self-contradictory. If a draft was AI-generated and then reviewed, edited, and owned by a human contributor, then the parts which survived reviewing and editing verbatim were still AI-generated...
Why do you care? If a human reviewed and edited it, someone filtered it to make sure it's correct. It's validated to be correct; that is the main point.
People have the illusion of reviewing and "owning" the final product, but that is not how it looks from the outside. The quality, the prose style, the errors that pass through due to inevitable AI-induced complacency ALWAYS EVENTUALLY show. If people got out of the AI bubbles they would see it too, alas.
We keep reading the same stories, and have been for at least a couple of years now. There is no novelty anymore. The core issues and problems have stayed the same since GPT-3.5. And because they are so omnipresent on the internet, we have grown able to recognise them almost automatically. It is no longer just a matter of quality; it is an insult to the readers when an author pretends that content is not AI generated just because they "reviewed it". Reviewing something that somebody else wrote is not ownership, especially when that somebody is an LLM.
In any case, I do not care if people want to read or write AI-generated books; just don't lie about whether it is AI generated.
This source is really hard to trust. AI or not, the author has done no work to really establish epistemological reliability and transparency. The entire book was published at once with no history, no evidence of the improvement and iteration it takes to create quality work, and no reference as to the creative process or collaborators or anything. And on top of that, the author does not seem to really have any other presence or history in the community. I love Zig, and have wanted more quality learning materials to exist. This, unfortunately, does not seem to be it.
For books that are published in more traditional manners, digital or paper, there is normally a credible publisher, editors, sometimes a foreword from a known figure, reviews from critics or experts in the field, and often a bio about the author explaining who they are and why they wrote the book etc. These different elements are all signals of reliability, they help to convey that the content is more than just fluff around an attention-grabbing title, that it has depth and quality and holds up. The whole publishing business has put massive effort into establishing and building these markers of trust.
The book claims it’s not written with the help of AI, but the content seems so blatantly AI-generated that I’m not sure what to conclude, unless the author is the guy OpenAI trained GPT-5 on:
> Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.
“Not just X - Y” constructions.
> By Chapter 61, you will not just know Zig; you will understand it deeply enough to teach others, contribute to the ecosystem, and build systems that reflect your complete mastery.
More not just X - Y constructions with parallelism.
Even the “not made with AI” banner seems AI generated! Note the 3 item parallelism.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
I don’t have anything against AI generated content. I’m just confused what’s going on here!
EDIT: after scanning the contents of the book itself I don’t believe it’s AI generated - perhaps it’s just the intro?
EDIT again: no, I’ve swung back to the camp of mostly AI generated. I would believe it if you told me the author wrote it by hand and then used AI to trim the style, but “no AI” seems hard to believe. The flow charts in particular stand out like a sore thumb - they just don’t have the kind of content a human would put in flow charts.
Every time I read things like this, it makes me think that AI was trained off of me. Using semicolons, utilizing classic writing patterns, and common use of compare and contrast are all examples of how they teach to write essays in high school and college. They're also all examples of how I think and have learned to communicate.
To be explicit, it’s not general hallmarks of good writing. It’s exactly two common constructions: not X but Y, and 3 items in parallel. These two pop up in extreme disproportion to normal “good writing”. Good writers know to save these tricks for when they really want to make a point.
Most people aren’t great writers, though (including myself). I’d guess that if people find the “not X but Y” compelling, they’ll overuse it. Overusing some stylistic element is such a normal writing “mistake”. Unless they’re an extremely good writer with lots of tools in their toolbox. But that’s not most people.
I find the probability that a particular writer latches onto the exact same patterns that AI latches onto, and does not latch onto any of the patterns AI does not latch onto, to be quite low. Is it a 100% smoking gun? No. But it’s suspicious.
But you didn't write that "Using semicolons, utilizing classic writing patterns, and common use of compare and contrast are not just examples of how they teach to write essays in high school and college; they're also all examples of how I think and have learned to communicate."
I mean maybe the content is not AI generated (I wouldn’t say it is) but the website does have an AI generated smell to it. From the colors to the shapes, it looks like Sonnet or Opus definitely made some tweaks.
Clearly your perception of what is AI generated is wrong. You can't tell something is AI generated only because it uses "not just X - Y" constructions. I mean, the reason AI text often uses it is because it's common in the training material. So of course you're going to see it everywhere.
Find me some text from pre-AI that uses so many of these constructions in such close proximity if it’s really so easy - I don’t think you’ll have much luck. Good authors have many tactics in their rhetorical bag of tricks. They don’t just keep using the same one over and over.
The style of marketing material was becoming SO heavily cargo-culted with telltale signs exactly like these in the leadup to LLMs.
Humans were learning the same patterns off each other. Such style advice has been floating around on e.g. LinkedIn for a while now. Just a couple years later, humans are (predictably) still doing it, even if the LLMs are now too.
We should be giving each other a bit of break. I'd personally be offended if someone thought I was a clanker.
You’re completely right, but blogs on the internet are almost entirely not written by great authors. So that’s of no use when checking if something is AI generated.
Yeah, you should. Zig is a trending language right now, and in the coming years many projects are likely to be rewritten in Zig instead of Rust (often referred to as "riiz").
I don't think you need to learn anything! Especially if you like Rust and it works for your projects.
Not an expert but Zig seems like a modern C - you manage memory yourself. I guess if you want more modern features than C offers, and actively don't want the type-system sort of features that Zig has (or are grumpy about compile times, etc) then it's there for you to try!
Partially agree on this; the std lib/crates and ease of use do make a difference (this is not even the main reason to use Rust), though Rust certainly has its own headaches. (Imagine searching for someone's implementation of HashMap on GitHub or using dedicated packages like glib, when you get it easily at crates.io.) Again, this is subjective and based on use cases.
For me, personally, any new language needs to have a "why." If a new language can't convince me in 1-2 sentences why I need to learn it and how it's going to improve software development, as a whole, it's 99% bs and not worth my time.
DHH does a great job of clarifying this during his podcast with Lex Fridman. The "why" is immediately clear, and one can decide for themselves if it's what they're looking for. I have not yet seen a "why" for Zig.
For many languages I agree, especially languages with steep learning curves (e.g. Rust, Haskell). But Zig is dead fast to learn, so I'd recommend just nipping through Ziglings and seeing if it's a language you want to add to the toolbox. It took me only about 10 hours to pick up and get used to, and it has immediately replaced C and C++ in my personal projects. It's really just a safer, more ergonomic C. If you already love C, I maybe wouldn't bother.
I'm a C/C++ developer. I write production code in MQL5 (C-like) and Go, and I use Python for research and Automation. I can work with other languages as well, but I keep asking myself: why should I learn Zig?
If I want to do system or network programming, my current stack already covers those needs — and adding Rust would probably make it even more future-proof. But Zig? This is a genuine question, because the "Zig book" doesn’t give me much insight into what are the real use cases for Zig.
If you're doing it for real-world value, keep doing that. But if you want traction, writing in a "fancy" language is almost a requirement. "A database engine written in Zig" or "A search engine written in Zig" sounds much flashier and guarantees attention. Look at this book: it is definitely AI slop, but it stays at the top spot, and there's barely any discussion about the language itself.
Enough ranting; now back to some reasons for choosing Zig:
- Cross platform tools with tiny binaries (Zig's built in cross compilation avoids the complex setup needed with C)
- System utilities or daemons (explicit error handling instead of silent patterns common in C)
- Embedded or bare metal work (predictable rules and fewer footguns than raw C)
- Interfacing with existing C libraries (direct header import without manual binding code; see the sketch after this list)
- Build and deployment tooling (single build system that replaces Make and extra scripts)
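To make the C-interop point concrete, here is a minimal sketch (assuming a libc-linked build, e.g. `zig build-exe demo.zig -lc`; math.h is just an example header):

    const std = @import("std");

    // Import a C header directly; Zig translates it at compile time,
    // so no hand-written binding layer is needed.
    const c = @cImport({
        @cInclude("math.h");
    });

    pub fn main() void {
        const r = c.cos(0.0); // call the C function like any Zig function
        std.debug.print("cos(0) = {d}\n", .{r});
    }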
For my personal usage, I'm working on replacing Docker builds for some Go projects that rely heavily on CGO by using `zig cc`. I'm not using the Zig language itself, but this could be considered one of its use cases.
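Roughly like this (illustrative flags; the musl target is one choice among several):

    # Cross-compile a CGO-using Go project from any host,
    # with zig cc standing in for the C toolchain.
    CGO_ENABLED=1 GOOS=linux GOARCH=amd64 \
      CC="zig cc -target x86_64-linux-musl" \
      go build ./...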
> For my personal usage, I'm working on replacing Docker builds for some Go projects that rely heavily on CGO by using `zig cc`. I'm not using the Zig language itself, but this could be considered one of its use cases.
Hm, I can see a good use case when we want reproducible builds of Go packages, including their C extensions. Is that your use case, or are you aiming for multi-environment support of your compiled "CGO extensions"?
My take on this, as someone who has professionally coded in C, C++, Go, Rust, and Python (and former darlings of the past), is that Zig gives you the sort of control that C does, with enough niceties, without breaking into other idioms the way C++ and Rust do in terms of complexity.
Rust "breaks" on some low-level stuff when you need to deal with unsafe (another idiom) or when you need to rely on proc-macros to have a component system like Bevy does. Nothing wrong with this; it's just that it's hard to cover all that ground.
The same happened with C++: having grown to cover a lot of ground, it ended up with lots of features and a real complexity burden.
In my experience with Zig, you find yourself thinking more about systems engineering, using the language to help you implement that without resorting to all sorts of language idioms and complexity. It feels more intuitive, in a way, given that it tries to stay simple and get out of your way. It's a more "unsurprising" programming language in terms of what you end up getting after you write the code, in terms of understanding exactly how the code will run.
In terms of ecosystem, let's say you have Java lunch, C lunch and C++ lunch (established languages) in their domains. Go is eating some Java (C#, etc.) lunch and, in smaller domains, some C++ lunch. Rust is in the same heavyweight category as Go, but it can eat more C++ lunch than Go ever could.
Now Zig will be able to compete in a way that makes it a real alternative to C's core values, which other programming languages have failed to achieve. So it will be aimed at the things C and C++ are doing now, where Go and Rust won't be good candidates.
If you have used Rust long enough, you can see that while it can cover almost all ground, it's not a good fit for lower-level stuff, or at least not without some compromises in either performance or complexity (affecting productivity). So it's more in the same family as C++ in terms of what you pay for (again, nothing wrong with that; it's just that some complex codebases will need a good amount of man-hours, along the same lines as C++).
Don't get me wrong, Rust can be good at low-level stuff too; it's just that some of its choices make you as a developer pay a price for those niceties when you need to get your hands dirty in specific domains.
With Zig you feel more focused on the machine, with fewer abstractions, as in C, but with enough goodies to make even the most die-hard C developer think about using it (something C++ and Rust never managed to do).
So I think Zig will have its place in the sun, as Rust does. But I see Rust taking over more of the place where Java used to be (together with Go), plus some of the things that were done in C++, while Zig will be more focused on systems and low-level stuff.
Modern C++ will still be around, but Rust and Zig will be used more and more where languages like C and C++ used to be the only real contenders, which is quite good in my POV.
What will happen is that Rust and Zig programmers might overlap and offer tools in the same area (see Bun and Deno, for instance), but the tools will excel in their own ways, and with time it will become clearer which domains Rust and Zig are each better at.
It was very hard to find a link to the table of contents… and then when I tried opening it, the link didn't work. I'm on iOS. I'd have loved to take a quick look at what's in the book…
Haha the fucking garbage. Before AI, before the internet, this overexaggerated, hokey prose was written by scummy humans and it came exclusively in porn magazines along with the x-ray specs and sea-monkey fishtanks.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
I think it's time to have a badge for non LLM content, and avoid the rest.
I imagine it's kind of like "What's stopping someone from forging your signature on almost any document?" The point is less that it's hard to fake, and more that it's a line you're crossing where everyone agrees you can't say "oops I didn't know I wasn't supposed to do that."
The name seems odd to me, because I think it's fine to describe things as a digital brain, especially when the word brain doesn't only apply to humans but to organisms as simple as a 959 cell roundworm with 302 neurons.
Even for content that isn’t directly composed by llm, I bet there’d be value in an alerting system that could ingest your docs and code+commits and flag places where behaviour referenced by docs has changed and may need to be updated.
This kind of “workflow” llm use has the potential to deliver a lot of value even to a scenario where the final product is human-composed.
> Most programming languages hide complexity from you—they abstract away memory management, mask control flow with implicit operations, and shield you from the machine beneath. This feels simple at first, but eventually you hit a wall. You need to understand why something is slow, where a crash happened, or how to squeeze every ounce of performance from your hardware. Suddenly, the abstractions that helped you get started are now in your way.
> Zig takes a different path. It reveals complexity—and then gives you the tools to master it.
> This book will take you from Hello, world! to building systems that cross-compile to any platform, manage memory with surgical precision, and generate code at compile time. You will learn not just how Zig works, but why it works the way it does. Every allocation will be explicit. Every control path will be visible. Every abstraction will be precise, not vague.
But sadly people like the prompter of this book will lie and pretend to have written things themselves that they did not. First three paragraphs by the way, and a bingo for every sign of AI.
I had a discussion on some other submission a couple of weeks back, where several people were arguing "it's obviously AI generated" (the style, btw, was completely different to this, quite a few expletives...). When I put the text into 5 random AI detectors, all except one (which said mixed, 10% AI or so) said 100% human. I was downvoted, and the argument became "AI detection tools can't detect AI", yet somehow the same people claim there are 100% clear telltale signs that something is AI (why the detection tools can't pick up those signs is baffling to me).
I have the feeling that the whole "it's AI" shtick has become a synonym for "I don't like this writing style".
It really does not add to the discussion. If people immediately posted "there are spelling mistakes, this is rubbish", they would rightfully get downvoted, but somehow saying "it's AI" is acceptable. Would the book be any more or less useful if somebody had used AI to write it? So what is your point?
Check out the other examples presented in this thread or read some of the chapters. I'm pretty sure the author used LLMs to generate at least parts of this text. In this case that would be particularly outrageous, since the author explicitly advertises the content as 100% handwritten.
> Would the book be any more or less useful if somebody used AI for writing it?
Personally, I don't want to read AI generated texts. I would appreciate if people were upfront about their LLM usage. At the very least they shouldn't lie about it.
I ran the introduction chapter through Pangram [1], which is one of the most reliable AI-generated text classifiers out there [2] (with a benchmarked accuracy of 99.85% over long-form text), and it gives high confidence for it having been AI-generated. It's also very intuitively obvious if you play a lot with LLMs.
I have no problem at all reading AI-generated content if it's good, but I don't appreciate dishonesty.
There's also the classic “it's not just X, it's Y”, adjective overuse, rule of 3, total nonsense (manage memory with surgical precision? what does that mean?), etc. One of these is excusable, but text entirely comprised of AI indicators is either deliberately written to mimic AI style, or the product of AI.
"not just x but y" is definitely a tell tale AI marker. But, people can write that as well. Also our writing styles can be influenced as we've seen so much AI content.
Anyway, if someone says they didn't use AI, I would personally give them the benefit of the doubt for a while at least.
Like many scholarly linguistic constructions, this is one many of us saw in Latin class with non solum ... sed etiam or non modo ... sed etiam: https://issuu.com/uteplib/docs/latin_grammar/234. I didn't take ancient Greek, but I wouldn't be surprised if there's also a version there.
Meh. I mean, who's it for? People should be adopting the stance that everything is AI on the internet and make decisions from there. If you start trusting people telling you that they're not using AI, you're setting yourself up to be conned.
Edit: So I wrote this before I read the rest of the thread, where everyone is pointing out this is indeed probably AI; so right off the bat the "AI-free" label is conning people.
I guess now the trend is Zig. The era of JavaScript frameworks has come to an end. After that came the AI trend. And now we have Zig and its allocators, especially the arena allocator.
The page you've linked is very confusing, but as far as I can tell that's a Zigbee device that the manufacturer (Tensor plc) consistently describes as a "Zig" device. I have no idea why, it's bizarre.
- This thesis [1] identifies a product in this family as a Zigbee device. It's on the 80th page (numbered 62). Elsewhere it's referred to as a Zig device.
- I can't find anyone else claiming to make Zig devices or any references to a Zig protocol outside of this one manufacturer and their distributors.
- The manufacturer makes a lot of weird typos. They variously say these devices operate at 2.4GHz, 2.4MHz, and 2.4Mhz.
- There's nothing about a Zig protocol on the Zigbee Wikipedia page.
Even if what you say is true, people make bets on new tech all the time. You show up early so you can capture mindshare. If Zig becomes mainstream then this could be the standard book that everyone recommends. Not just that, it’s more likely the language succeeds if it has good learning materials - that’s an outcome the author would love.
> people make bets on new tech all the time. You show up early so you can capture mindshare.
I got in on the ground floor with Elixir, got my startup built on it. Now we have 3 engineers working full-time on Elixir. None of that would have happened if I had looked at a young language and said "it's not used in the real world".
"nobody uses in the real world yet" is uncharitable, as Zig is used in many real-world projects (Bun and Tigerbeetle are written in Zig, for example). But there's value being at the forefront of technologies that you think are going to explode soon, so that's how people find time and energy, I guess.
Why does this feel like an ad? I've seen pangram mentioned a few times now, always with that tagline. It feels like a marketing department skulking around comments.
The other pangram mention elsewhere in this comment section is also me -- I'm totally unaffiliated with them, just a fan of their tool
I specify the accuracy and false positive rate because otherwise skeptics in comment sections might think it's one of the plethora of other AI detection tools that don't really work.
FWIW I work on AI and I also trust Pangram quite a lot (though exclusively on long-form text spanning at least 4 or more paragraphs). I'm pretty sure the book is heavily AI written.
SAME. I was looking for a donation button myself! I've paid for worse quality instructional material. this is just the sort of thing I'm happy to support
I submitted this, and unfortunately it is likely AI generated. The author's GitHub history suggests it at the very least, along with a seeming misunderstanding of a reference to a Zig language feature (labeled blocks - https://zig.guide/language-basics/labelled-blocks/) in the project issues (https://github.com/zigbook/zigbook/issues/4).
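For anyone unfamiliar with the feature in question: a labeled block lets a block yield a value via break. A minimal sketch:

    // `break :blk value` makes the labeled block evaluate to value.
    const total = blk: {
        var sum: u32 = 0;
        for ([_]u32{ 1, 2, 3 }) |v| sum += v;
        break :blk sum; // total == 6
    };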
I’m not sure how much value is to be had here, and it’s unfortunate the author wasn’t honest about how it was created.
I wish I wouldn’t have submitted this so quickly but I was excited about the new resource and the chapters I dug into looked good and accurate.
I worry about whether this will be maintained, if there are hallucinations, and if it’s worth investing time into.
Yes, this should immediately set off all BS alarms.
I also submitted it to Reddit before realizing it was largely AI generated.
Created https://github.com/zigbook/zigbook/issues/18
AI is really great at creating superficial signals. And most people judge things by superficial signals.
It seems like it had to take quite a bit of effort to make, and is interesting on its own. And I would trust it more if I knew how it was made (LLMs or not).
As another suspicious data point see this issue by the author: https://github.com/microsoft/vscode/issues/272725
Edit: https://news.ycombinator.com/item?id=45952581 found some concrete issues
Because AI content is, at a minimum, controversial nowadays. And if you are OK with lying about authorship, then it is not much further down the pole to embellish the lie a bit more.
I'd love it if we can stop the "Oh, this might be AI, so it's probably crap" thing that has taken over HN recently.
1. There is no evidence this is AI generated. The author claims it wasn't, and on the specific issue you cite, he explains why he's struggling with understanding it, even if the answer is "obvious" to most people here.
2. Even if it were AI generated, that does not automatically make it worthless. In fact, this looks pretty decent as a resource. Producing learning material is one of the few areas we can likely be confident that AI can add value, if the tools are used carefully - it's a lot better at that than producing working software, because synthesising knowledge seen elsewhere and moving it into a new relatable paradigm (which is what LLMs do, and excel at), is the job of teaching.
3. If it's maintained or not is neither here nor there - can it provide value to somebody right now, today? If yes, it's worth sharing today. It might not be in 6 months.
4. If there are hallucinations, we'll figure them out and prove the claim it is AI generated one way or another, and decide the overall value. If there is one hallucination per paragraph, it's a problem. If it's one every 5 chapters, it might be, but probably isn't. If it's one in 62 chapters, it's beating the error rate of human writers quite some way.
Yes, the GitHub history looks "off", but maybe they didn't want to develop in public and just wanted to get a clean v1.0 out there. Maybe it was all AI generated and they're hiding. I'm not sure it matters, to be honest.
But I do find it grating that every time somebody even suspects an LLM was involved, there is a rush of upvotes for "calling it out". This isn't rational thinking. It's not using data to make decisions, it's not logical to assume all LLM-assisted writing is slop (even if some of it is), and it's actually not helpful in this case to somebody who is keen to learn zig to decide if this resource is useful or not: there are many programming tutorials written by human experts that are utterly useless, this might be a lot better.
> 1. There is no evidence this is AI generated. The author claims it wasn't, and on the specific issue you cite, he explains why he's struggling with understanding it, even if the answer is "obvious" to most people here.
There is, actually. Copy the introduction into Pangram and it will say 100% AI generated.
That's not evidence, at least not evidence that would stand up to a peer review if the author were to refute it.
It's not proof but it's definitely evidence.
When I give my own writings to Pangram, it says 100% human.
> 2. Even if it were AI generated, that does not automatically make it worthless.
It does make it automatically worthless if the author claims it's hand made. How am I supposed to trust this author if they just lie about things upfront? What worth does learning material have if it's written by a liar? How can I be sure the author isn't just lying with lots of information throughout the book?
The biggest red flag for me is the author hiding their name. If you had written a quality book about a programming language, you would not be hiding your identity from the world.
> Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.
I'm not sure what they expect, but to me Zig looks very much like C with a modern standard lib and slightly different syntax. This isn't groundbreaking, and it isn't a thought paradigm that should be all that novel to most systems engineers, the way, for example, OCaml could be. Stuff like this alienates people who want a technical justification for the use of a language.
There is nothing new under the Sun. However, some languages manifest as good rewrites of older languages. Rust is that for C++. Zig is that for C.
Rust is the small, beautiful language hiding inside of Modern C++. Ownership isn't new. It's the core tenet of RAII. Rust just pulls it out of the backwards-compatible kitchen sink and builds it into the type system. Rust is worth learning just so that you can fully experience that lens of software development.
Zig is Modern C development encapsulated in a new language. Most importantly, it dodges Rust and C++'s biggest mistake: not passing allocators into containers and functions. All realtime development has to rewrite its entire standard library as a result, as with the EASTL.
On top of the great standard library design, you get comptime, native build scripts, (err)defer, error sets, builtin simd, and tons of other small but important ideas. It's just a really good language that knows exactly what it is and who its audience is.
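To show what the allocator-passing point looks like in practice, a minimal sketch (std container APIs have shifted across recent Zig versions, e.g. managed vs. unmanaged lists, so treat the exact names as illustrative):

    const std = @import("std");

    pub fn main() !void {
        var gpa = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa.deinit();
        const allocator = gpa.allocator();

        // The container owns no hidden global allocator;
        // every allocating call names its allocator explicitly.
        var list: std.ArrayListUnmanaged(u32) = .empty; // `.{}` on older versions
        defer list.deinit(allocator);

        try list.append(allocator, 42);
        std.debug.print("len = {d}\n", .{list.items.len});
    }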
I think that describing Zig as a "rewrite of C" (good or otherwise) is as helpful as describing Python as a rewrite of Fortran. Zig does share some things with C - the language is simple and values explicitness - but at its core is one of the most sophisticated (and novel) programming primitives we've ever seen: A general and flexible partial evaluation engine with access to reflection. That makes the similarities to C rather superficial. After all, Zig is as expressive as C++.
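As a small sketch of that point (mine, not from the book): a Zig generic is just an ordinary function that the compiler runs at compile time and that returns a type:

    // A function evaluated at compile time; the returned struct type is
    // specialized for T and n, like a template but written in plain Zig.
    fn Vec(comptime T: type, comptime n: usize) type {
        return struct {
            data: [n]T,

            fn fill(v: T) @This() {
                return .{ .data = [_]T{v} ** n };
            }
        };
    }

    const V3 = Vec(f32, 3);
    const ones = V3.fill(1.0);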
> Most importantly, it dodges Rust and C++'s biggest mistake, not passing allocators into containers and functions
I think that is just a symptom of a broader mistake made by C++ and shared by Rust, which is a belief (that was, perhaps, reasonable in the eighties) that we could and should have a language that's good for both low-level and high-level programming, and that resulted in compromises that disappoint both goals.
I don't know man, Rust's borrowing semantics are pretty new under the sun, and actually do change the way you think about software. It's a pretty momentous paradigm shift.
Zig is nice too, but it's not that.
To call Rust syntax beautiful is a stretch. It seems that way in the beginning but then quickly devolves into a monstrosity when you start doing more complex things.
Zig, on the other hand, specifically addresses the syntax shortcomings of parts of C. And it does it well. The claim that Rust makes code safer than C because it's more readable applies to Zig more than it does to Rust.
I feel like the reason the Rust zealots lobby like crazy to embed Rust everywhere is twofold. One is that they genuinely believe in it, and the other is that they know that if another language that addresses one of the main Rust claims, without all the cruft, gains popularity, they lose the chance of being permanently embedded in places like the kernel. Because once they're in, it's a decade-long job market.
> they know that if other languages that address one of the main rust claims without all the cruft gains popularity they lose the chance of being permanently embdedded in places like the kernel
First of all, I'm really opposed to saying "the kernel". I am sure you're talking about the Linux kernel, but there are other kernels (BSD, Windows etc.) that are certainly big enough to not call it "the" kernel, and that may also have their own completely separate "rust-stories".
Secondly, I think the logic behind this makes no sense, primarily because Rust at this point is 10 years old from stable and almost 20 years old from initial release; the adoption into the Linux kernel wasn't exactly rushed. Even if it was, why would Rust adoption in the Linux kernel exclude adoption of another language as well, or a switch to another, if it's better? The fact that Rust was accepted at all alongside C disproves the assumption, because clearly that kernel is open to "better" languages.
The _simplest_ explanation for why Rust has succeeded is that it solves actual problems, not that "zealots" are lobbying for it to ensure they "have a job".
> Rust at this point is 10 years old from stable
Rust is not stable even today! There is no spec, no alternative implementation, no test suite... "Stable" means whatever the current compiler compiles! Existing code may stop compiling any day....
Maybe in 10 years it may become stable, like other "boring" languages (Golang and Java).
This lack of stability is why Linus opposes its integration into the kernel.
In the "other good news" department, GCC is adding a Rust frontend to provide the alternative implementation, and I believe the Rust guys have agreed to write a specification for the language.
I'm waiting for gccrs to start using the language, actually.
> To call Rust syntax beautiful is a stretch.
I'm no Rust fan, but beauty of a syntax is always in the eye of the beholder.
I personally find Go, C++ and Python's syntax beautiful. All can be written in very explicit or expressive forms. On the other hand, you can hide complexity to a point.
If you are going to do complex things in a compact space, you'll asymptotically approach Perl or PCRE. It's maths.
All code is maths, BTW.
> To call Rust syntax beautiful is a stretch.
I don’t see where the comment you’re replying to does that (was it edited?). Their comment says nothing about aesthetics.
Higher up
> Rust is the small, beautiful language hiding inside of Modern C++
For my part, Zig's syntax feels wrong to me, and I don't even know why. I really want to like its syntax, as Zig seems really promising to me, but I just don't, which makes it not very enjoyable for me to write.
I don't know if it's my lack of practice, but I never felt the same about, say, Rust's syntax, or the syntax of any other language for that matter.
> if other languages that address one of the main rust claims without all the cruft
But regardless of how much one likes Zig, it addresses none of the problems that Rust seeks to solve. It's not a replacement for Rust at all, and isn't suitable for any of the domains where Rust excels.
> and isn't suitable for any of the domains where Rust excels.
That's a pretty bold claim since Zig is specifically designed for systems programming, low level stuff, network services, databases (think Tigerbeetle). It's not memory safe like Rust is, but it comes with constructs that make it simple to build largely memory safe programs.
> It's not memory safe like Rust is, but it comes with constructs that make it simple to build largely memory safe programs.
Right, this is the specific important thing that Rust does that Zig doesn't (with the caveat that Rust includes the `unsafe` mechanism - as a marked, non-default option - specifically to allow for necessary low-level memory manipulation that can't be checked for correctness by the compiler). Being able to guarantee that something can't happen is more valuable than making it simple to do something correctly most of the time.
Sure but there's this belief in the Rust community that it's not responsible anymore to write software that isn't memory safe on the same level as Rust.
So Zig would fail that, but then you could also consider C++ unsuitable for production software - and we know it clearly is still suitable.
I predict Zig will just become more and more popular (and with better, although not as complete, memory safety), and will be applied to mission-critical infra.
If we ignore the recent moves by governmental cybersecurity agencies and big tech away from unsafe programming languages, as much as technically possible.
Introducing a language with the same safety as Modula-2 or Object Pascal would have made sense in the 1990s; nowadays, with improved type systems making the transition from academia into the mainstream, we (the industry) know better.
It is not only Rust, it is Linear Haskell, OCaml effects, Swift 6 ownership model, Ada/SPARK, Chapel,....
Of those listed, I'd bet Swift (having had experience with it) is the most pleasant to work with. I just hope it takes off on the systems and backend side at some point.
It's not that simple though, Zig has equivalent spatial memory safety which prevents issues that are pretty consistently among (or at) the top of the list for most dangerous vulnerability classes.
And while I don't have enough experience with Rust to claim this first hand, my understanding is that writing correct unsafe Rust code is at least an order of magnitude harder than writing correct Zig code due to all of the properties/invariants that you have to preserve. So it comes with serious drawbacks, it's not just a quick "opt out of the safety for a bit" switch.
> Being able to guarantee that something can't happen is more valuable than making it simple to do something correctly most of the time.
Of course, all other things being equal, but they're not.
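To make the spatial part concrete, a sketch (the check depends on build mode: Debug and ReleaseSafe panic, ReleaseFast does not):

    fn poke(buf: []u8, i: usize) void {
        buf[i] = 1; // bounds-checked in safe build modes
    }

    pub fn main() void {
        var buf: [4]u8 = .{ 0, 0, 0, 0 };
        poke(&buf, 4); // panics with "index out of bounds" instead of corrupting memory
    }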
> And while I don't have enough experience with Rust to claim this first hand, my understanding is that writing correct unsafe Rust code is at least an order of magnitude harder than writing correct Zig code due to all of the properties/invariants that you have to preserve.
How do you make such boldly dismissive assertions if you don't have enough experience with Rust? You are talking as if these invariants are some sort of requirements/constraints that the language imposes on the programmer. They're not. It's a well-known guideline/paradigm meant to contain any memory safety bugs within the unsafe blocks. Most of the invariants are specific to the problem at hand, not to the programming language. They are conditions that must be met in any language; C and Zig are no exceptions. Failure to adhere to them will land you in trouble, no matter what sort of safety your language guarantees. They are often talked about in the context of Rust because the ones related to memory-unsafe operations can be tackled and managed within the small unsafe blocks, instead of sprawling throughout the code base.
> So it comes with serious drawbacks, it's not just a quick "opt out of the safety for a bit" switch.
Rust is not the ultimate solution to every problem in the world. But this sort of exaggeration and hyperbole is misleading and doesn't help anyone choose any better.
I would compare the recent Rust Android post [1], where they report a 5000x lower memory vulnerability rate compared to traditional C/C++, with the number of segfaults found in Bun [2].
In my opinion Zig does not move the needle on real safety when the codebase becomes sufficiently complex.
[1]: https://security.googleblog.com/2025/11/rust-in-android-move...
[2]: https://github.com/oven-sh/bun/issues?q=segfault%20OR%20segm...
Given the density of memory issues in the Bun issue tracker I have a hard time squaring the statement that Zig makes it "easy" to build memory safe programs.
https://github.com/oven-sh/bun/issues?q=segfault%20OR%20segm...
Rust is not designed for low level system programming / embedded systems like Zig is. It is designed to make a browser and software that share requirements with making a browser.
There is some overlap, but it's still different. The Zig approach to memory safety is to make everything explicit, which is good in the constrained environments typical of embedded programming. The Rust approach is the opposite: you don't really see what is happening, but there are mechanisms to keep you safe. It is good for complex software with lots of moving parts in an unconstrained environment, like a browser.
For a footgun analogy: Zig hands you a gun that will never go off unless you aim and pull the trigger, so you can shoot your foot, but no sane person will. It is a good sniper rifle. The Rust gun can go off at any time, even when you don't expect it, but it is designed in such a way that this will never happen while it is pointed at your foot, even if you aim it there. It is a good machine gun.
> Rust is not designed for low level system programming / embedded systems like Zig is.
Pray tell, with Rust already being used in kernels, drivers, and embedded what makes Zig better suited for low-level systems?
More chances for UB to blow up in your hand? For that, there is C.
Great C interop, first class support for cross-compilation, well suited for arena allocators.
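For the arena point, a minimal sketch (using std.heap.ArenaAllocator; illustrative, not the only way to set one up):

    const std = @import("std");

    pub fn main() !void {
        var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
        defer arena.deinit(); // one call frees every allocation made below

        const a = arena.allocator();
        const scratch = try a.alloc(u8, 128);
        _ = scratch; // per-request or per-frame scratch work goes here
    }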
You can use Rust in kernel/embedded code; you can also use C++ (I did) and even Java! But most prefer to use C, and I think Zig is a better alternative to C for those in the field.
There is still one huge drawback with Zig, and that's maturity. Zig is still in beta, and the closer you get to the metal, the more it tends to matter. Hardware projects typically have much longer life cycles, and the general philosophy is "if it ain't broke, don't fix it". Rust is not as mature as C by far (there is a reason C is still king), but at least it is out of beta and is seeing significant production use.
I remember when I talked about Zig with the CTO of the embedded branch of my company. His reaction was telling: "I am happy to hear someone mention Zig, it is a very interesting language and it is definitely on my watch list, but not mature enough to invest in." He was happy that I mentioned Zig because in the company the higher-ups are all about Rust because of the hype, even though we do very little of it, BTW; it is still mostly C and C++. And yeah, hype is important: customers have heard about Rust as some magical tech that will make the code bug-free, they haven't heard about Zig, so Rust sells better. In the end, they go for C anyway.
Kind of, https://en.wikipedia.org/wiki/Cyclone_(programming_language)
What it has achieved is making affine types something mainstream developers would care about.
That "kind of" is doing a bit of load-bearing work. The differences are pretty huge. Plus, the borrow checker is nowhere to be found. Cyclone is more like C with a few tweaks (tagged unions, generics, regions, etc.).
Borrow checking is basically a synonym for an affine type system.
The same outcome can be achieved via affine types, linear types, effects, dependent types, regions, and proofs, among many other lines of CS research into type systems.
Which is why following Rust's success, plenty of managed languages are now going through the evolution step to combine automatic resource management with improved type systems.
Taking the one that best approaches their current design.
There were languages with lifetimes and borrowing mechanics before Rust. Rust packages these mechanics in a nice way. Just like Zig encodes many niceties in a useful C language (comptime, simple cross-compilation, stdlib).
Which ones?? Before Rust, to my knowledge, no language had an actually practical way to use lifetimes and borrow-checking so that both memory safety and concurrency safety (data races, which is huge) were solved, even though the concepts were known in research. Doing the actual work to make it practical is what makes the difference between some obscure research topic and a widely used language that actually solves serious problems in the real world.
Cyclone for one, which AT&T created exactly to replace C.
Yeah but is that a practical language people can use instead of C and Rust? I’ve always heard of it only as a research language that inspired rust but nothing else.
Outside AT&T, until they ramped the project down, I guess not. Rust also took its time to actually take off beyond Mozilla, and it is around because it was rescued by big tech (Amazon, Google, Microsoft, ...) hiring most of the core team after Mozilla's layoffs.
> actually do change the way you think about software. It's a pretty momentous paradigm shift.
That's true for people who don't read and think about the code they write. For people who think from the perspective of the computer, Rust is "the same checks, but enforced by the compiler".
Make no mistake, to err is human, but Rust doesn't excite me that much.
> Most importantly, it dodges Rust and C++'s biggest mistake, not passing allocators into containers and functions
Funny. This was a great sell to me. I wonder why it isn’t the blurb. Maybe it isn’t a great sell to others.
The problem for me with so many of these languages is that they’re always eager to teach you how to write a loop when I couldn’t care less and would rather see the juice.
However, nowadays with comprehensive books like this, LLM tools can better produce good results for me as I try it out.
Thank you.
Very, very few people outside of foundational system software, HFT shops, and game studios understand why it's a great selling point. Everyone else likes the other points and doesn't realize the actual selling point of the language.
>Rust is that for C++
No it's not. Rust has roots in functional languages. It is completely orthogonal to C++.
Graydon Hoare, a former C++ programmer on Mozilla Firefox and the original creator of Rust, acknowledges that for many people, Rust has become a viable alternative to C++ :
https://graydon2.dreamwidth.org/307291.html
And on slide #4, he mentions that "C++ is well past expiration date" :
https://venge.net/graydon/talks/intro-talk-2.pdf
It's possible that Graydon's earliest private versions of Rust, in the 4 years prior to that PDF, were an OCaml-inspired language, but it's clear that once the team of C++ programmers at Mozilla started adding their influences, they wanted it to be a cleaner version of C++. That's also how the rest of the industry views it.
> Rust has become a viable alternative to C++
Alternative yes, derivative no. Rust doesn't approach C++'s metaprogramming features, and it probably shouldn't given how it seems to be used. It's slightly self-serving for browser devs to claim Rust solves all relevant problems in their domain and therefore eclipses C++, but to me in the scientific and financial space it's a better C, making tradeoffs I don't see as particularly relevant.
I say this as a past contributor to the Rust std lib.
> Rust and C++'s biggest mistake, not passing allocators into containers and functions
Rather, basing its entire personality around this philosophy is Zig's biggest mistake. If you want to pass around allocators in C++ or Rust, you can just go ahead and do that. But the reason people don't isn't because it's impossible in those languages, it's because the overwhelming majority of the time it's a lot of ceremony for no benefit.
Like, surely people see that in C itself there's nothing stopping anyone from passing around allocators, and yet almost nobody ever does. Ever wonder why that is?
Much of the book's copy appears to have been written by AI (despite the foreword statement that none of it was), which explains the hokey overenthusiasm and exaggerations.
As we know AI is at least as smart as the average human. It knows the Zeitgeist and thus adds “No AI used” in order to boost “credibility”. :) (“credibility” since AI is at least as smart the average human, for us in the know.)
That's ok, in the near future nobody will actually read this book. AI will be reading it. This is training data.
For those who actually want to learn languages which are "fundamentally changing how you think about software", I'd recommend the Lisp family and APL family.
And Prolog as well.
I'd also throw Erlang/Elixir out there. And I really wished Elm wasn't such a trainwreck of a project...
No need to include Elixir here; none of the important bits that will change how you view software come from Elixir, it's just a skin on top of Erlang (+ some standard library wrappers) and that's it.
I'd argue more people use Elixir over Erlang at this point. Sure its just an abstraction on top of Erlang, but people learn through Elixir nowadays, not through Erlang.
If you want to learn the actual mind changing aspects of the BEAM, clearly learning the simpler, smaller language with a more direct route to the juice is the way to go. Hence Erlang, not Elixir. I learned Elixir first back in 2015, and then learned Erlang, and have had the pleasure of using both in production. When all was said and done I really think Erlang was better, especially over a long enough time frame.
As a general point I'd like to state that I don't think it really matters what "people" do when you're learning for yourself. In the grand scheme of things approximately no one uses the BEAM, but this doesn't mean that learning how to use it is somehow pointless.
What is the most optimal Erlang/Elixir you can think of regarding standardized effect systems for recording non-determinism, replaying and reversible computing? How comparable are performance numbers of Erlang/Elixir with Java and wasm?
I'd recommend asking the Elixir community about this as I didn't even understand your question. I am by no means a professional with Erlang/Elixir. I threw it out there because these language force you to think differently compared to common OOP languages.
And Forth.
And 6502 assembly. ;)
And SNOBOL.
And Icon.
And ...
Am I correct that you can essentially "learn APL without learning APL" by just learning Numpy / Pytorch?
I looked at array languages briefly, and my impression was: "ooh, this is just Numpy but weirder."
Not even close. While Numpy has many similar operations, it lacks the terseness, concepts like trains and forks etc. Modern APL style doesn't use... control flow (neither recursion nor loops nor if...) and often avoids variables (tacit/point-free style).
You might enjoy this video: https://www.youtube.com/watch?v=a9xAKttWgP4
Zig is so novel that it's hard to find any language like it. Its similarity to C is superficial. AFAIK, it is the first language ever to rely on partial evaluation so extensively. Of course, partial evaluation itself is not new at all, but neither were touchscreens when the iPhone came out. The point wasn't that it had a touchscreen, but that it had almost nothing but. The manner and extent of Zig's use of partial evaluation are unprecedented. I have nothing against OCaml, but it is a variant of ML, a 1970s language, that many undergrads were taught at university in the nineties.
I'm not saying everyone should like Zig, but its design is revolutionary:
https://news.ycombinator.com/item?id=45852774
> It is about fundamentally changing how you think about software.
> I'm not sure what they expect, but to me Zig looks very much like C
Yes. I think people should sincerely stop with this kind of wording.
That makes Zig look like some kind of cult.
Technically speaking, Zig democratized the concept of imperative compile time meta-programming (which is an excellent thing).
For everything else, this is mainly reuse and cherry pick from other languages.
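To make the comptime point concrete, here's a minimal sketch (assuming a recent Zig compiler; the names are illustrative): the same imperative language runs at compile time and can return types as values.

    const std = @import("std");

    // Ordinary Zig code, executed at compile time, returning a freshly built type.
    fn Pair(comptime T: type) type {
        return struct { first: T, second: T };
    }

    pub fn main() void {
        const p = Pair(u32){ .first = 1, .second = 2 };
        std.debug.print("{d}, {d}\n", .{ p.first, p.second });
    }

No template sublanguage, no macros; generics fall out of partial evaluation.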
I guess comptime is a little different but yeah I wouldn't say it fundamentally changes how you think about software.
I wouldn't say that about OCaml either really though. It's not wildly different in the way that e.g. Lean's type system, or Rust's borrow checker or Haskell's purity is.
D has similar comptime capabilities, if I recall correctly, and precedes Zig by almost two decades.
I don't think it's the same. You can do template metaprogramming, but Zig lets you use Zig itself which is a lot nicer.
I'm not a D programmer though so I could be wrong.
The Zig community really tries to match the Rust one in terms of cult resemblance.
Did it occur to you that Rust and Zig might actually be very good?
People that consider other people that are excited about something "culty" are usually people that themselves are excited by absolutely nothing.
That's a very profound statement! Logical and sounds intelligent. Nice one!
PS: I'm stealing it, by the way.
> Most programming languages hide complexity from you
A garbage collector isn't "hidden complexity". We all know they exist and use memory and CPU.
How Zig is better than Ruby:
- it has a linear performance gain
- the IDE has more information
That's great if you absolutely need it, but in a lot of cases:
- you don't need linear performance gains or you need the gains to be more than linear
- the linear performance gains come at a huge cost: code readability (assuming you managed to write it and it compiles)
- relying too much on the IDE won't make better programs and won't make you a better programmer
This looks fantastic. Pedagogically it makes sense to me, and I love this approach of not just teaching a language, but a paradigm (in this case, low-level systems programming), in a single text.
Zig got me excited when I stumbled into it about a year ago, but life got busy and then the io changes came along and I thought about holding off until things settled down - it's still a very young language.
But reading the first couple of chapters has piqued my interest in a language and the people who are working with it in a way I've not run into since I encountered Ruby in ~2006 (before Rails hit v1.0), I just hope the quality stays this high all the way through.
> Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.
Written by ChatGPT?
> You came for syntax. You'll leave with a philosophy.
So many comments about the AI generation part. Why does it matter? If it’s good and accurate and helpful why do you care? That’s like saying you used a calculator to calculate your equations so I can’t trust you.
I am just impressed by the quality and details and approach of it all.
Nicely done (PS: I know nothing about systems programming and I have been writing code for 25 years)
Because the site explicitly says:
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
If the site had said something like "We use AI to clean up our prose, but it was all audited thoroughly by a human afterward", I wouldn't have an issue. Even better if they shared their prompts.
Exactly. Hard to trust the contents of a book that starts with a lie.
> Why does it matter?
Because AI gets things wrong, often, in ways that can be very difficult to catch. By their very nature LLMs write text that sounds plausible enough to bypass manual review (see https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...), so some find it best to avoid using it at all when writing documentation.
But all those "it's AI" posts are about the prose and "style", not the actual content. So even if (and that is a big if) the text was written with the help of AI (and there are many valid reasons to use it, e.g. if you're not a native speaker), that does not mean the content was written by AI and thus contains AI mistakes.
If it was so obviously written by AI then finding those mistakes should be easy?
The style is the easiest thing for people to catch; GP said that the technical issues can be more difficult to find, especially in longer texts, though sometimes they are indeed caught.
Passing even correct information through an LLM may or may not taint it; it may produce sentences that look similar at first glance but carry a different, imprecise meaning - specific wording can be crucial in some cases. So if the style is in question, the content is as well. And if you could write the technically correct text in the first place, why would you put it through another step?
Humans get things wrong too.
Quality prose usually only becomes that after many reviews.
AI tools make different types of mistakes than humans, and that's a problem. We've spent eons creating systems to mitigate and correct human mistakes, which we don't have for the more subtle types of mistakes AI tends to make.
AI gets things wrong ("hallucinates") much more often than actual subject matter experts. This is disingenuous.
Presumably the "subject matter expert" will review the output of the LLM, just like a reviewer. I think it's disingenuous to assume that just because someone used AI they didn't look at or review the output.
A serious one yes.
But why would a serious person claim that they wrote this without AI when it's obvious they used it?!
Using any tool is fine, but someone bragging about not having used a tool they actually used should make you suspicious about the amount of care that went into their work.
Fortunately, we can't just get rid of humans (right?) so we have to use them _somehow_
If AI is used by “fire and forget”, sure - there’s a good chance of slop.
But if you carefully review and iterate the contributions of your writers - human or otherwise - you get a quality outcome.
Absolutely.
But why would you trust the author to have done that when they are lying in a very obvious way about not using AI?
Using AI is fine, it's a tool, it's not bad per se. But claiming very loud you didn't use that tool when it's obvious you did is very off-putting.
That’s fine. Write it out yourself and then ask an AI how it could be improved with a diff. Now you’ve given it double human review (once in creation then again reviewing the diff) and single AI review.
That's one review with several steps and some AI assistance. Checking your work twice is not equivalent to having it reviewed by two people; part of reviewing your work (or the work of others) is checking multiple times and taking advantage of whatever tools are at your disposal.
Because the first thing you see when you click the link is "Zero AI" pasted under the most obviously AI-generated copy I've ever seen. It's just an insult to our intelligence, obviously we're gonna call OP out on this. Why lie like that?
It's funny how everyone has gaslit themselves into doubting their own intuitions on the most blatant specimen where it's not just a mere whiff of the reek but an overpowering pungency assaulting the senses at every turn, forcing themselves to exclaim "the Emperor's fart smells wonderful!"
It matters because it irritates me to no end that I have to review AI-generated content that a human did not verify first. I don't like being made to work under the guise of being given free content.
> That’s like saying you used a calculator to calculate your equations so I can’t trust you.
A calculator exists solely for the realm of mathematics, where you can afford to more or less throw away the value of human input and overall craftsmanship.
That is not the case with something like this, which - while it leans in to engineering - is in effect viewed as a work of art by people who give a shit about the actual craft of writing software.
If you believed that you wouldn't explicitly say there was no AI generated content at all, you'd let it speak for itself.
> Why does it matter?
I am just a human supremacist.
>That’s like saying you used a calculator to calculate your equations so I can’t trust you.
No it isn't. My TI-83 is deterministic and will give me exactly what I ask for, and will always do so, and when someone uses it they need to understand the math first or otherwise the calculator is useless.
These AI models on the other hand don't care about correctness, by design don't give you deterministic answers, and the person asking the question might as well be a monkey as far as their own understanding of the subject matter goes. These models are if anything an anti-calculator.
As Dijkstra points out in his fantastic essay on the idiocy of natural language "computation", what you are doing is exactly not computation but a kind of medieval incantation. Computers were designed to render impossible precisely the nonsense that LLMs produce. The biggest idiot on earth will still get a correct result from the calculator because unlike the LLM it is based on boolean logic, not verbal or pictorial garbage.
https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
An awful lot of commenters are convinced that it's AI-generated, despite explicit statements to the contrary. Maybe they're wrong, maybe they're right, but none of them currently have any proof stronger than vibes. It's like everyone has gaslit themselves into thinking that humans can't write well-structured neutral-tone docs any more.
This is not written in a neutral tone at all! There is a lot of bland marketing speak that feels completely out of place. This is not how you write good technical literature.
Many people have already shown the hallucinated APIs, which are much stronger evidence than your "vibes".
I suppose the author may have deliberately added the "No AI assistance" notice - making sure all the hallucinated bugs are found via outraged developers raising tickets. Without that people may not even have bothered.
I value human work and I do NOT value work that has been done with heavy AI usage. Most AI things I've seen are slop; I instantly recognize AI songs, for example. I just don't want anything to do with it. The uniqueness of creative work is lost with using AI.
Insecurity, that's why.
I too have this feeling sometimes. It's a coping mechanism. I don't know why we have this but I guess we have to see past it and adapt to reality.
> [Learning Zig] is about fundamentally changing how you think about software.
Learning LISP, Fortran, APL, Perl, or really any language that is different from what you’re used to, will also do this for you.
I'd add Prolog to that list; but Fortran and Perl aren't all that different from other procedural languages.
Kind of. Perl was one of the first languages to bring FP to the masses, while Lisp and ML languages were hardly looked at by them.
See https://hop.perl.plover.com/book/
I agree, I love zig but the things that make me program differently are features like excellent enum/union support, defer and comptime, which aren't readily available in the other languages I tend to use (C++, Fortran and Python).
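For anyone who hasn't seen them, a rough sketch of the enum/union bit (my own hedged example, not from the book, against a recent Zig):

    const std = @import("std");

    // Tagged union: the payload travels with the tag, and switch is exhaustive.
    const Shape = union(enum) {
        circle: f64, // radius
        square: f64, // side length

        fn area(self: Shape) f64 {
            return switch (self) {
                .circle => |r| std.math.pi * r * r,
                .square => |s| s * s,
            };
        }
    };

    pub fn main() void {
        const s: Shape = .{ .circle = 2.0 };
        std.debug.print("area = {d:.2}\n", .{s.area()});
    }

Forgetting a case is a compile error, which is exactly the kind of thing I miss in the languages above.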
It's pretty incredible how much ground this covers! However, the ordering feels a little confusing to me.
One example is in chapter 1. It talks about symbol exporting based on platform type, without explaining ELF. This is before talking about while loops.
It's had some interesting nuggets so far, and I've followed along since I'm familiar with some of the broad strokes, but I can see it being confusing to someone new to systems programming.
It looks cool! No experience with Zig, so I can't comment on the accuracy, but I will take a look at it this week. It's also a bit annoying that there is no PDF version I could download, as the website is pretty slow. After taking a look at the repository (https://github.com/zigbook/zigbook/tree/main), each page seems to be written in AsciiDoc, so I'll take a look at compiling a PDF version later today.
If there is a PDF version, please remember to give me one. Thank you in advance.
zigbook.pdf => https://files.catbox.moe/gobtw7.pdf
HOWTO: The text can be found per-chapter in `./pages/{chapter}.adoc`, but each chapter includes code snippets found in a respective `./chapters-data/code/{chapter}/` subdirectory. So, perhaps a hacky way to do it (I was too lazy to fully figure out asciidoctor's flags): using a script, I created a combined book.adoc that includes all the others with `include::{chapter}.adoc` directives, then ran `asciidoctor-pdf -a sourcedir=../chapters-data/code -r asciidoctor-diagram -o book.pdf ./pages/book.adoc`.
Welp. I wish I had read the comments first to discover that this is AI generated. On the other hand, I got to experience the content without bias.
I opted to give it a try instead of reading the comments and the book was arranged in a super strange way where it's discussing concepts that a majority of programmers would never be concerned with when starting out with learning a language. It's very different to learn about some of these concepts if you are reading a language doc in order to work on the language itself. But if you want to learn how to use the language, something like:
is absolutely never going to be something you dump into chapter 1. I skimmed through a few chapters from there and it's blocks of stuff thrown in randomly. The introduction to the if conditional throws in Zig Intermediate Representation with absolutely no explanation of what it is or why it's even being discussed. Came here to comment that this has been written pretty poorly or just targets a very niche audience, and now I discover it's slop. What a waste of time. The one thing AI was supposed to save.
Very well done! wow! Thanks for this. Going through this now.
One comment: about the syntax highlighting, the dark blue for keywords against a black background is very difficult to read. And if you opt for the white background, the text becomes off-white/grey, which again is very difficult to read.
"The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices."
Such a bald-faced and utterly disgusting lie. The introduction itself ticks every single flag of AI generated slop. AI is trained well on corporate marketing brochures.
The gratuitous accusations in this thread should be flagged.
Hmm, the explanation of allocators is much more detailed in the book, but I feel the language reference, although more compact, is much more reasonable. [0]
I'll keep exploring this book though, it does look very impressive.
0 - https://ziglang.org/documentation/master/#Memory
It's really hard to believe this isn't AI-generated, but today I was trying to use the HTTP server from std after the 0.15 changes and couldn't figure out how it's supposed to work until I searched repos on GitHub. LLMs couldn't figure it out either; they were stuck in a loop of changing/breaking things even further until they arrived at the solution of using the deprecated way. So I guess this is actually handwritten, which is amazing, because it looks like the best resource I've seen for Zig up until now.
> It's really hard to believe this isn't AI generated
A case of a person who relies on LLMs so much that they cannot imagine doing something big by themselves.
it's not only the size - it was pushed all at once, anonymously, using text that highly resembles that of an AI. I still think that some of the text is AI generated. perhaps not the code, but the wording of the text just reeks of AI
> it was pushed all at once
For some of my projects I develop against my own private git server, then when I'm ready to go public, create a new git repo with a fully squashed history. My early commits are basically all `git commit -m "added stuff"`
Can you provide some examples where the text reeks of AI?
Literally the heading as soon as you click the submitted link
> Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.
The "it's not X, it's Y" phrasing screams LLM these days
It's almost as though the LLMs were trained on all the writing conventions which are used by humans and are parroting those, instead of generating novel outputs themselves.
They haven’t picked up any one human writing style; they’ve converged on a weird amalgamation of expressions and styles that, taken together, don’t resemble any real human’s writing and begin to feel quite unnatural.
The Uncanny Valley of prose.
Plenty of people use “it’s not X, it’s Y”
As someone who uses em-dashes a lot, I’m getting pretty tired of hearing something “screams AI” about extremely simple (and common) human constructs. Yeah, the author does use that convention a number of times. But that makes sense, if that’s a tool in your writing toolbox, you’ll pull it out pretty frequently. It’s not signal by itself, it’s noise. (does that make me an AI!?) We really need to be considering a lot more than that.
Reading through the first article, it appears to be compelling writing and a pretty high quality presentation. That’s all that matters, tbh. People get upset about AI slop because it’s utterly worthless and exceptionally low quality.
https://www.zigbook.net/chapters/45__text-formatting-and-uni...
The repetitiveness of the shell commands (and using zig build-exe instead of zig run when the samples consist of short snippets), the filler bullet points and section organization that fail to convey any actual conceptual structure. And ultimately throughout the book the general style of thought processes lacks any of the zig community’s cultural anachronisms.
If you take a look at the repository you’ll also notice baffling tech choices, not justified by the author, that run counter to the Zig ethos.
(Edit: the build system chapter is an even worse offender in meaningless cognitively-cluttering headings and flowcharts, it’s almost certainly entirely hallucinated, there is just an absurd degree of unziglikeness everywhere: https://www.zigbook.net/chapters/26__build-system-advanced-t... -- What’s with the completely irrelevant flowchart of building the zig compliler? What even is the point of module-graph.txt? And icing on the cake in the “Vendoring vs Registry Dependencies” section.)
The repetitiveness suggests copying and pasting, not an LLM. I actually find LLMs unlikely to do this.
I read the first few paragraphs. Very much reads like LLM slop to me...
E.g., "Zig takes a different path. It reveals complexity—and then gives you the tools to master it."
If we had a reliable oracle, I would happily bet a $K on significant LLM authorship.
Yeah, and then why would they explicitly deny it? Maybe the AI was instructed not to reveal its origin. It's hard to enjoy this book knowing it's likely made by an LLM.
I've had the same experience as you with Zig. I quite love the idea of Zig, but the undocumented churn is a bit much. I wish they had auto-generated docs that reflect the current state of the stdlib, at least. Even if it just listed the signatures with no commentary.
I was trying to solve a simple problem but Google, the official docs, and LLMs were all out of date. I eventually found what I needed in Zig's commit history, where they casually renamed something without updating the docs. It's been renamed once more apparently, still not reflected in the docs :shrugs:.
Wait, doesn't `zig std` launch the autogenerated docs?
It’s currently broken, or was recently on the 0.16 dev branch (master)
But you can tell your LLM to just go look at the source code (after checking it out so it doesn’t try 20s github requests). Always works like a charm for me.
>Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.
Zig is just C with a marketing push. Most developers already know C.
I suspect most developers do not know C.
C is fine; C++ is where they jumped the shark.
C++ is far better than C in very many ways. It's also far worse than C in very many other ways. Given a choice between the two, I'd still choose C++ every day just for RAII. There's only so much that we can blame programmers for memory leaks, use-after-free, buffer overflows, and other things that are still common in new C code. At some point, it is the language itself that is unsuitable and insufficient.
Until WG14 acknowledges the safety holes in C, it isn't fine at all; it should be nuked.
I’m not sure what that has to do with the comment you’re replying to…
C++ explored a lot of ideas that some modern languages borrowed. C++ just had to haul along all the cruft it inherited and built up.
No, C is not fine. It is a really bad language that I unfortunately have to code professionally.
I would rephrase it as, Zig is just Modula-2 with a C like syntax.
That tagline unfortunately turned me off the book, without even starting to read.
I really don't need this kind of self-enlightenment rubbish.
What if I read the whole book and felt no change?
I think I understand SoA just fine.
It is also just such a supremely unziglike thing to state.
Early talks by Andrew explicitly leaned into the notion that "software can be perfect", which is a deviation from how most programmers view software development.
Zig also encourages you to "think like a computer" (also an explicit goal stated by Andrew) even more than C does on modern machines, given things like real vectors instead of relying on auto vectorization, the lack of a standard global allocator, and the lack of implicit buffering on standard io functions.
I would definitely put Zig on the list of languages that made me think about programming differently.
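(The "real vectors" bit refers to Zig's built-in @Vector type; a minimal sketch, assuming a recent compiler:

    const std = @import("std");

    pub fn main() void {
        // First-class SIMD values: element-wise ops without hoping for auto-vectorization.
        const a: @Vector(4, f32) = .{ 1.0, 2.0, 3.0, 4.0 };
        const b: @Vector(4, f32) = .{ 10.0, 20.0, 30.0, 40.0 };
        const sum: [4]f32 = a + b; // vectors coerce to arrays, e.g. for printing
        std.debug.print("{any}\n", .{sum});
    }

)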
Has it changed how you program in other languages? Because that to me is the true mark of a thought-shifting language.
I'm not sure how what you stated is different from writing highly performant C.
I think it mostly comes down to the standard library guiding you down this path explicitly. The C stdlib is quite outdated and is full of bad design that affects both performance and ergonomics. It certainly doesn't guide you down the path of smart design.
Zig _the language_ barely does any of the heavy lifting on this front. The allocator and io stories are both just stdlib interfaces. Really the language just exists to facilitate the great toolchain and stdlib. From my experience the stdlib seems to make all the right choices, and the only time it doesn't is when the API was quickly created to get things working, but hasn't been revisited since.
A great case study of the stdlib being almost perfect is SinglyLinkedList [1]. Many other languages implement it as a container, but Zig has opted to implement it as an intrusively embedded element. This might confuse a beginner who would expect SinglyLinkedList(T) instead, but it has implications surrounding allocation and it turns out that embedding it gives you a more powerful API. And of course all operations are defined with performance in mind. prepend is given to you since it's cheap, but if you want postpend you have to implement it yourself (it's a one liner, but clearly more expensive to the reader).
Little decisions add up to make the language feel great to use and genuinely impressive for learning new things.
[1] https://ziglang.org/documentation/master/std/#std.SinglyLink...
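For anyone who hasn't seen the intrusive style, it's roughly this (hedged: the API has moved between releases, and this is my reading of a recent master). The node lives inside your struct, and you recover the struct with @fieldParentPtr, so the list itself never allocates:

    const std = @import("std");

    const Item = struct {
        value: u32,
        node: std.SinglyLinkedList.Node = .{},
    };

    pub fn main() void {
        var a: Item = .{ .value = 1 };
        var b: Item = .{ .value = 2 };

        var list: std.SinglyLinkedList = .{};
        list.prepend(&a.node);
        list.prepend(&b.node);

        var it = list.first;
        while (it) |node| : (it = node.next) {
            // Recover the containing Item from its embedded node.
            const item: *Item = @fieldParentPtr("node", node);
            std.debug.print("{d}\n", .{item.value});
        }
    }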
A nitpick about the website: the top progress bar is kind of distracting (high-contrast color with animation). It's also unnecessary because there is already a scrollbar on the right side.
A lot of love went into this. It's evident throughout. Great job!
Nah, just a lot of prompting.
As someone who is diving deep into Zig, I'm actually going to evaluate all this and compare it to Ziglings and the Zig track on Exercism.
So despite this...
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
I just don't buy it. I'm 99% sure this is written by an LLM.
Can the author... Convince me otherwise?
> This journey begins with simplicity—the kind you encounter on the first day. By the end, you will discover a different kind of simplicity: the kind you earn by climbing through complexity and emerging with complete understanding on the other side.
> Welcome to the Zigbook. Your transformation starts now.
...
> You will know where every byte lives in memory, when the compiler executes your code, and what machine instructions your abstractions compile to. No hidden allocations. No mystery overhead. No surprises.
...
> This is not about memorizing syntax. This is about earning mastery.
Pretty clear it's all AI. The @zigbook account only has 1 activity prior to publishing this repo, and that's an issue where they mention "ai has made me too lazy": https://github.com/microsoft/vscode/issues/272725
After reading the first five chapters, I'm leaning this way. Not because of a specific phrase, but because the pacing is way off. It's really strange to start with symbol exporting, then moving to while loops, then moving to slices. It just feels like a strange order. The "how it works" and "key insights" also feel like a GPT summarization. Maybe that's just a writing tic, but the combination of correct grammar with bad pacing isn't something I feel like a human writer has. Either you have neither (due to lack of practice), or both (because when you do a lot of writing you also pick up at least some ability to pace). Could be wrong though.
It's just an odd claim to make when the text feels very much like AI-generated content and was published anonymously. It's obviously possible to write like this without AI, but I can't remember reading something like this that wasn't written by AI.
It doesn't take away from the fact that someone used a bunch of time and effort on this project.
To be clear, I did not dismiss the project or question its value - simply questioned this claim as my experience tells me otherwise and they make a big deal out of it being human written and "No AI" in multiple places.
I agree with you. After reading a couple of the chapters I'd be surprised if this wasn't written by an LLM.
Did they actually spend a bunch of time and effort though? I think you could get an llm to generate the entire thing, website and all.
Check out the sleek-looking terminal -- there's no ls or cd; it's just an AI hallucination.
I was pretty skeptical too, but it looks legit to me. I've been doing Zig off and on for several years, and have read through the things I feel like I have a good understanding of (though I'm not working on the compiler, contributing to the language, etc.) and they are explained correctly in a logical/thoughtful way. I also work with LLMs a ton at work, and you'd have to spoon-feed the model to get outputs this cohesive.
Pangram[1] flags the introduction as totally AI-written, which I also suspected for the same reasons you did
[1] one of the only AI detectors that actually works, 99.9% accuracy, 0.1% false positive
Keep in mind that pangram flags many hand-written things as AI.
> I just ran excerpts from two unpublished science fiction / speculative fiction short stories through it. Both came back as ai with 99.9% confidence. Both stories were written in 2013.
> I've been doing some extensive testing in the last 24 hours and I can confidently say that I believe the 1 in 10,000 rate is bullshit. I've been an author for over a decade and have dozens of books at hand that I can throw at this from years prior to AI even existing in anywhere close to its current capacity. Most of the time, that content is detected as AI-created, even when it's not.
> Pangram is saying EVERYTHING I have hand written for school is AI. I've had to rewrite my paper four times already and it still says 99.9% AI even though I didn't even use AI for the research.
> I've written an overview of a project plan based on a brief and, after reading an article on AI detection, I thought it would be interesting to run it through AI detection sites to see where my writing winds up. All of them, with the exception of Pangram, flagged the writing as 100% written by a human. Pangram has "99% confidence" of it being written by AI.
I generally don't give startups my contact info, but if folks don't mind doing so, I recommend running pangram on some of their polished hand written stuff.
https://www.reddit.com/r/teachingresources/comments/1icnren/...
How long were the extracts you gave to Pangram? Pangram only has the stated very high accuracy for long-form text covering at least a handful of paragraphs. When I ran this book, I used an entire chapter.
Weird to me that nobody ever posts the actual alleged false positive text in these criticisms
I've yet to see a single real Pangram false positive that was provably published when it says it was, yet plenty such comments claiming they exist
Doesn't mean that the author might not use AI to optimise legibility. You can write stuff yourself and use an LLM to enhance the reading flow. Especially for non-native speakers it is immensely helpful to do so. Doesn't mean that the content is "AI-generated". The essence is still written by a human.
> Doesn't mean that the author might not use AI to optimise legibility.
I agree that there is a difference between entirely LLM-generated, and LLM-reworded. But the statement is unequivocal to me:
> The Zigbook intentionally contains no AI-generated content—it is hand-written
If an LLM was used in any fashion, then this statement is simply a lie.
>If an LLM was used in any fashion, then this statement is simply a lie.
While I don't believe the article was created this way, it's possible to use an LLM purely as a classifier. E.g. prompt along the lines of "Does this paragraph contain any errors? Answer only yes or no." and generate only a single set of token probabilities, without any autoregression. Flag any paragraphs with sufficient probability of "yes" for human review.
Clarity in writing comes mostly from the logical structure of ideas presented. Writing can have grammar/style errors but still be clear. If the structure is bad after translation, then it was bad before translation too.
But then you cannot write that
"The Zigbook intentionally contains no AI-generated content—it is hand-written"
I wish AI had the built-in irony of adding vomit emojis to its sycophantic sentences.
> Can the author... Convince me otherwise?
Not disagreeing with you, but out of interest, how could you be convinced otherwise?
I'm not sure, but I try my best to assume good faith / be optimistic.
This one hit a sore spot b/c many people are putting time and effort into writing things themselves and to claim "no ai use" if it is untrue is not fair.
If the author had a good explanation... idk, maybe they're not a native English writer and used an LLM to translate, and that included the "no LLMs used" call-out, which was translated improperly, etc.
note that the front page also says: "61 chapters • Project-based • Zero AI"
To me it's another specimen in the "demonstrating personhood" problem that predates LLMs. e.g. Someone replies to you on HN or twitter or wherever, are they a real person worth engaging with? Sometimes it'll literally be a person but their behavior is indistinguishable from a bot, that's their problem. Convincing signs of life include account age, past writing samples, and topic diversity.
Git log / draft history
The sweet irony of this post is that this very post itself is written by an LLM.
I don't think so, I think it's just a pompous style of writing.
You can't just say that a linguistic style "proves" or even "suggests" AI. Remember, AI is just spitting out things it's seen before elsewhere. There are plenty of other texts I've seen with this sort of writing style, written long before AI was around.
Can I also ask: so what if it is or it isn't?
While AI slop is infuriating and the bubble hype is maddening, I'm not sure that calling out content as something that "must" be AI every time somebody dislikes its style, and then debating whether it is or isn't, is not at least as maddening. It feels like all published content now gets debated like this, and I'm definitely not enjoying it.
You can be skeptical of anything but I think it's silly to say that these "Not just A, but B" constructions don't strongly suggest that it's generated text.
As to why it matters, doesn't it matter when people lie? Aren't you worried about the veracity of the text if it's not only generated but was presented otherwise? That wouldn't erode your trust that the author reviewed the text and corrected any hallucinations even by an iota?
> but I think it's silly to say that these "Not just A, but B" constructions don't strongly suggest ai generated text
Why? Didn't people use such constructions frequently before AI? Some authors probably overused them at the same frequency AI does.
I don't think there was very much abuse of "not just A, but B" before ChatGPT. I think that's more of a product of RLHF than the initial training. Very few people wrote with the incredibly overwrought and flowery style of AI, and the English speaking Internet where most of the (English language) training data was sourced from is largely casual, everyday language. I imagine other language communities on the Internet are similar but I wouldn't know.
Don't we all remember 5 years ago? Did you regularly encounter people who write like every followup question was absolutely brilliant and every document was life changing?
I think about why's (poignant) Guide to Ruby [1], a book explicitly about how learning to program is a beautiful experience. And the language is still pedestrian compared to the language in this book. Because most people find writing like that saccharine, and so don't write that way. Even when they're writing poetically.
Regardless, some people born in England can speak French with a French accent. If someone speaks French to you with a French accent, where are you going to guess they were born?
[1] https://poignant.guide/book/chapter-1.html
It's been alleged that a major source of training data for many LLMs was libgen and SciHub - hardly casual.
Even if that were comparable in size to the conversational Internet, how many novels and academic papers have you read that used multiple "not just A, but B" constructions in a single chapter/paper (that were not written by/about AI)?
IMO HN should add a guideline about not insinuating things were written by AI. It degrades the quality of the site similarly to many of the existing rules.
Arguably it would be covered by some of the existing rules, but it's become such a common occurrence that it may need singling out.
What degrades conversation is to lie about something being not AI when it actually is. People pointing out the fraud are right to do so.
One thing I've learned is that comment sections are a vital defense against AI content spreading, because while you might fool some people, it's hard to fool all the people. There have been times I've been fooled by AI only to see in the comments the consensus that it is AI. So now it's my standard practice to check comments to see what others are saying.
If mods put a rule into place that muzzles this community when it comes to alerting others that a fraud is being perpetrated, that just makes this place a target for AI scams.
It's 2025, people are going to use technology and its use will spread.
There are intentional communities devoted to stopping the spread of technology, but HN isn't currently one of them. And I've never seen an HN discussion where curiosity was promoted by accusations or insinuations of LLM use.
It seems consistent to me with the rules against low effort snark, sarcasm, insinuating shilling, and ideological battles. I don't personally have a problem with people waging ideological battles about AI, but it does seem contrary to the spirit of the site for so many technical discussions to be derailed so consistently in ways that specifically try to silence a form of expression.
I'm 100% okay with AI spreading. I use it every day. This isn't a matter of an ideological battle against AI, it's a matter of fraudulent misrepresentation. This wouldn't be a discussion if the author themselves hadn't claimed what they had, so I don't see why the community should be barred from calling that out. Why bother having curious discussions about this book when they are blatantly lying about what is presented here? Here's some curiosity: what else are they lying about, and why are they lying about this?
To clarify there is no evidence of any lying or fraud. So far all we have evidence of is HN commenters assuming bad faith and engaging in linguistic phrenology.
There is evidence, it's circumstantial, but there's never going to be 100% proof. And that's the point, that's why community detection is the best weapon we have against such efforts.
(Nitpick: it's actually direct evidence, not circumstantial evidence. I think you mean it isn't conclusive evidence. Circumstantial evidence is evidence that requires an additional inference, like the accused being placed at the scene of the crime implying they may have been the perpetrator. But stylometry doesn't require any additional inference, it's just not foolproof.)
Who cares?
Still better than just nagging.
Using AI to write is one thing, claiming you didn't when you did should be objectionable to everyone.
This.
I wouldn't mind a technical person transparently using AI to do the writing, which isn't necessarily their strength, as long as the content itself comes from the author's expertise and the generated writing is thoroughly vetted to make sure there's no hallucinated misunderstanding in the final text. At the end of the day this would just increase the amount of high-quality technical content available, because the set of people with both good writing skill and deep technical expertise is much narrower than just the latter.
But claiming you didn't use AI when you did breaks all trust between you and your readership and makes the end result pretty much worthless, because why read a book if you don't trust the author not to waste your time?
Who wants to be so petty.
I'm sure there are more interesting things to say about this book.
So petty as to lie about using AI or so petty as to call it out? Calling it out doesn't seem petty to me.
I intend to learn Zig when it reaches 1.0 so I was interested in this book. Now that I see it was probably generated by someone who claimed otherwise, I suspect this book would have as much of a chance of hurting my understanding as helping it. So I'll skip it. Does that really sound petty?
[flagged]
I understand being okay with a book being generated (some of the text I published in this manual [1] is generated), I can imagine not caring that the author lied about their use of AI, but I really don't understand the suggestion I write a book about a subject I just told you I'm clueless about. I feel like there's some kind of epistemic nihilism here that I can't fathom. Or maybe you meant it as a barb and it's not that deep? You tell me I guess.
[1] https://maxbondabe.github.io/attempt/intro.html
I would rather care whether there is a book at all and whether it is useful.
> I write a book about a subject I just told you I'm clueless about
Use AI. Even if you use AI, it's still a lot of work. Or write a book about why people shouldn't let AI write their books.
I'm also concerned whether it is useful! That's why I'm not gunnuh read it after receiving a strong contrary indicator (which was less the use of AI than the dishonesty around it). That's also why I try to avoid sounding off on topics I'm not educated in (which is to say, why I'm not writing a book about Zig).
Remember - I am using AI and publishing the results. I just linked you to them!
> I'm also concerned whether it is useful!
So you could do everyone a favour by giving a sufficiently detailed review, possibly with recommendations to the author how to improve the book. Definitely more useful than speculating about the author's integrity.
I'm satisfied with what's been presented here already, and as someone who doesn't know Zig it would take me several weeks (since I would have to learn it first), so that seems like an unreasonable imposition on my time. But feel free to provide one yourself.
Well, there must have been a good reason why you don't like the book. I didn't see good reasons in this whole discussion so far, just a lot of pedantry. No commenter points to technical errors, inaccuracies, poor code examples, or pedagogical problems. The entire objection rests on subjective style preferences and aesthetic nitpicking rather than legitimate quality concerns.
I don't see what else I can say to help you understand. I think we just have very different values and world views and find one another's perspective baffling. Perhaps your preferred AI assistant, if directed to this conversation, could put it in clearer terms than I am able to.
My statement refers to this claim: "I'm 99% sure this is written by an LLM."
The hypocrisy and entitlement mentality that prevails in this discussion is disgusting. My recommendation to the fellow below that he should write a book himself (instead of complaining) was even flagged, demonstrating once again the abuse of this feature to suppress other, completely legitimate opinions.
I'm guessing it was flagged because it came off as snark. I've gone ahead and vouched it but of course I can't guarantee it won't get flagged again. To be frank this comment is probably also going to get flagged for the strong language you're using. I don't think either are abusive uses of flagging.
Additionally, please note that I neither complained nor expressed an entitlement. The author owes me as much as I owe them (nothing beyond respect and courtesy). I'm just as entitled to express a criticism as they are to publish a book. I suppose you could characterize my criticism as complaints, but I don't see what purpose that really serves other than to turn up the rhetorical temperature.
I don't know any of you. But Zig has opened a big door into systems programming for people like me who have never done it before. And Zig code looks (to a guy coming from curly-brace languages) easier to understand, with a really small learning curve.
That statement is honestly self-contradictory. If a draft was AI-generated and then reviewed, edited, and owned by a human contributor, then the parts which survived reviewing and editing verbatim were still AI-generated...
Why do you care? If a human reviewed and edited it, someone filtered it to make sure it's correct. It's validated to be correct; that is the main point.
Because it never works like that in practice.
People have the illusion of reviewing and "owning" the final product, but that is not how it looks from the outside. The quality, the prose style, the errors that slip through due to inevitable AI-induced complacency ALWAYS EVENTUALLY show. If people got out of the AI bubbles they would see it too, alas.
We've been reading the same stories for at least a couple of years now. There is no novelty anymore. The core issues and problems have stayed the same since GPT-3.5. And because they are so omnipresent on the internet, we have grown able to recognise them almost automatically. It is no longer just a matter of quality; it is an insult to the readers when an author pretends that content is not AI-generated just because they "reviewed it". Reviewing something that somebody else wrote is not ownership, especially when that somebody is an LLM.
In any case, I do not care if people want to read or write AI-generated books; just don't lie about whether it is AI-generated.
> if a human reviewed and edited it, someone filtered it to make sure it’s correct
Yes.
But it's not “free from AI-generated prose”, so why advertise it as such?
And since the first sentence is a lie, why should we believe the second sentence at all?
This source is really hard to trust. AI or not, the author has done no work to really establish epistemological reliability and transparency. The entire book was published at once with no history, no evidence of the improvement and iteration it takes to create quality work, and no reference as to the creative process or collaborators or anything. And on top of that, the author does not seem to really have any other presence or history in the community. I love Zig, and have wanted more quality learning materials to exist. This, unfortunately, does not seem to be it.
How do you feel about regular books, whose iterations and edits you don't see?
For books that are published in more traditional manners, digital or paper, there is normally a credible publisher, editors, sometimes a foreword from a known figure, reviews from critics or experts in the field, and often a bio about the author explaining who they are and why they wrote the book etc. These different elements are all signals of reliability, they help to convey that the content is more than just fluff around an attention-grabbing title, that it has depth and quality and holds up. The whole publishing business has put massive effort into establishing and building these markers of trust.
Do you have any criticism of the content, or just "I don't know the author"?
They didn't say "this is in error", so they don't need any such example errors. They also didn't say just "I don't know the author".
The book claims it’s not written with the help of AI, but the content seems so blatantly AI-generated that I’m not sure what to conclude, unless the author is the guy OpenAI trained GPT-5 on:
> Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.
“Not just X - Y” constructions.
> By Chapter 61, you will not just know Zig; you will understand it deeply enough to teach others, contribute to the ecosystem, and build systems that reflect your complete mastery.
More not just X - Y constructions with parallelism.
Even the “not made with AI” banner seems AI generated! Note the 3 item parallelism.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
I don’t have anything against AI generated content. I’m just confused what’s going on here!
EDIT: after scanning the contents of the book itself I don’t believe it’s AI generated - perhaps it’s just the intro?
EDIT again: no, I’ve swung back to the camp of mostly AI generated. I would believe it if you told me the author wrote it by hand and then used AI to trim the style, but “no AI” seems hard to believe. The flow charts in particular stand out like a sore thumb - they just don’t have the kind of content a human would put in flow charts.
Every time I read things like this, it makes me think that AI was trained off of me. Using semicolons, utilizing classic writing patterns, and common use of compare and contrast are all examples of how they teach to write essays in high school and college. They're also all examples of how I think and have learned to communicate.
I'm not sure what to make of that either.
To be explicit, it’s not general hallmarks of good writing. It’s exactly two common constructions: not X but Y, and 3 items in parallel. These two pop up in extreme disproportion to normal “good writing”. Good writers know to save these tricks for when they really want to make a point.
Most people aren’t great writers, though (including myself). I’d guess that if people find the “not X but Y” compelling, they’ll overuse it. Overusing some stylistic element is such a normal writing “mistake”. Unless they’re an extremely good writer with lots of tools in their toolbox. But that’s not most people.
I find the probability that a particular writer latches onto the exact same patterns that AI latches onto, and does not latch onto any of the patterns AI does not latch onto, to be quite low. Is it a 100% smoking gun? No. But it’s suspicious.
Interesting, I'll have to look for those.
But you didn't write that "Using semicolons, utilizing classic writing patterns, and common use of compare and contrast are not just examples of how they teach to write essays in high school and college; they're also all examples of how I think and have learned to communicate."
I mean maybe the content is not AI generated (I wouldn’t say it is) but the website does have an AI generated smell to it. From the colors to the shapes, it looks like Sonnet or Opus definitely made some tweaks.
Clearly your perception of what is AI generated is wrong. You can't tell something is AI generated only because it uses "not just X - Y" constructions. I mean, the reason AI text often uses it is because it's common in the training material. So of course you're going to see it everywhere.
Find me some text from pre-AI that uses so many of these constructions in such close proximity if it’s really so easy - I don’t think you’ll have much luck. Good authors have many tactics in their rhetorical bag of tricks. They don’t just keep using the same one over and over.
The style of marketing material was becoming SO heavily cargo-culted with telltale signs exactly like these in the leadup to LLMs.
Humans were learning the same patterns off each other. Such style advice has been floating around on e.g. LinkedIn for a while now. Just a couple years later, humans are (predictably) still doing it, even if the LLMs are now too.
We should be giving each other a bit of break. I'd personally be offended if someone thought I was a clanker.
You’re completely right, but blogs on the internet are almost entirely not written by great authors. So that’s of no use when checking if something is AI generated.
I sent the text through an AI detector with 0.1% false positive rate and it was highly confident the Zig book introduction was fully AI-written
> No hidden control flow: Zig has no hidden allocators, goroutines…
Neither of those things are control flow, and yet again I’m reading a pro-Zig text taking a dig at Go without any substance to the criticism.
Also funny having a dig at goroutines when Zig is all over the place with its async implementation.
Folks! Do I really need to learn Zig? I am already good with Rust!!
Yeah, you should. Zig is a trending language right now, and in the coming years many projects are likely to be rewritten in Zig instead of Rust (often referred to as "riiz").
So, a new JS linter written in Zig, when?
is it even required?
Is this sarcasm? If yes, I got your joke; otherwise please enlighten me _/\_
I don't think you need to learn anything! Especially if you like Rust and it works for your projects.
Not an expert, but Zig seems like a modern C: you manage memory yourself. I guess if you want more modern features than C offers, and actively don't want the type-system sort of features that Rust has (or are grumpy about compile times, etc.), then it's there for you to try!
Bar C, if you're into systems programming there's no language you *need* to learn.
Partially agree on this: stdlib/crates and ease of use do make a difference (and this is not even the main reason to use Rust), though Rust certainly has its own headaches. (Imagine searching for someone's implementation of HashedMap on GitHub or using dedicated packages like glib, when you get it easily at crates.io.) Again, this is subjective, based on use cases.
I wonder if HN should mandate an [AI] tag for links to AI generated content.
It should be against HN policy to link to AI generated content just as it's against HN policy to post AI generated comments.
Futile as it is.
But can we train AI on this beautifully hand-crafted material, and ask it later to rewrite Rust with Zig? :]
wow, it's so cool.
Using Next.js and Tailwind-bloat feels contradictory to Zig.
For me, personally, any new language needs to have a "why." If a new language can't convince me in 1-2 sentences why I need to learn it and how it's going to improve software development as a whole, it's 99% bs and not worth my time.
DHH does a great job of clarifying this during his podcast with Lex Fridman. The "why" is immediately clear, and one can decide for themselves if it's what they're looking for. I have not yet seen a "why" for Zig.
For many languages I agree, especially languages with steep learning curves (e.g. Rust, Haskell). But Zig is dead fast to learn, so I'd recommend just nipping through Ziglings and seeing if it's a language you want to add to the toolbox. It took me only about 10 hours to pick up and get used to, and it has immediately replaced C and C++ in my personal projects. It's really just a safer, more ergonomic C. If you already love C, I maybe wouldn't bother.
Hmmm what about this: https://ziglang.org/learn/why_zig_rust_d_cpp/
Convincing enough?
I'm a C/C++ developer. I write production code in MQL5 (C-like) and Go, and I use Python for research and Automation. I can work with other languages as well, but I keep asking myself: why should I learn Zig?
If I want to do system or network programming, my current stack already covers those needs, and adding Rust would probably make it even more future-proof. But Zig? This is a genuine question, because the "Zig book" doesn't give me much insight into what the real use cases for Zig are.
If you're doing it for real-world value, keep doing that. But if you want traction, writing in a "fancy" language is almost a requirement. "A database engine written in Zig" or "A search engine written in Zig" sounds much flashier and guarantees attention. Look at this book: it is definitely AI slop, but it stays at the top spot, and there's barely any discussion about the language itself.
Enough ranting; now back to some reasons for choosing Zig:
For my personal usage, I'm working on replacing Docker builds for some Go projects that rely heavily on CGO by using `zig cc`. I'm not using the Zig language itself, but this could be considered one of its use cases.
Hm, I can see a good use case when we want reproducible builds of Go packages, including their C extensions. Is that your use case, or are you aiming for multi-environment support of your compiled "CGO extensions"?
My take on this, as someone who has professionally coded in C, C++, Go, Rust, and Python (and former darlings of the past), is that Zig gives you the sort of control that C does, with enough niceties that it doesn't break into other idioms the way C++ and Rust do in terms of complexity. Rust "breaks" on some low-level stuff when you need to deal with unsafe (another idiom) or when you need to rely on proc-macros to have a component system like Bevy does. Nothing wrong with this; it's just hard to cover all the ground. The same happened with C++: having to grow to cover a lot of ground, it ended up with lots of features and also with some complexity burden.
In my experience with Zig, you have the feeling of thinking more about systems engineering, using the language to help you implement that without resorting to all sorts of language idioms and complexity. It feels more intuitive in a way, given that it tries to stay simple and get out of your way. It's a more "unsurprising" programming language: you understand exactly what you end up getting after you write the code and how it will run.
In terms of ecosystem, let's say you have the Java lunch, the C lunch, and the C++ lunch (established languages) in their domains. Go is eating some of the Java (C#, etc.) lunch and, in smaller domains, some of the C++ lunch. Rust is in the same heavyweight category as Go, but it can eat more of the C++ lunch than Go ever could.
Now Zig can compete in a way that makes it a real alternative to C's core values, which other programming languages have failed to achieve. So it will be aimed at the things C and C++ are doing now, where Go and Rust won't be good candidates.
If you have used Rust long enough, you can see that while it can cover almost all the ground, it's not a good fit for lower-level stuff, or at least not without compromises in either performance or complexity (affecting productivity). So it's more in the same family as C++ in terms of what you pay for (again, nothing wrong with that; it's just that some complex codebases will need a good amount of man-hours of effort, along the same lines as C++).
Don't get me wrong, Rust can be good at low-level stuff too; it's just that some of its choices make you, as a developer, pay a price for those niceties when you need to get your hands dirty in specific domains.
With Zig you feel more focused on the machine, with as few abstractions as in C, but with enough goodies to make even the most die-hard C developer think about using it (something C++ and Rust never managed to do).
So I think Zig will have its place in the sun, as Rust does. But I see Rust taking more of the place where Java used to be (together with Go), plus some of the things that were made in C++, while Zig will be more focused on systems and low-level stuff.
Modern C++ will still be around, but Rust and Zig will be used more and more where languages like C and C++ used to be the only real contenders, which is quite good in my POV.
What will happen is that Rust and Zig programmers might overlap and offer tools in the same areas (see Bun and Deno, for instance), but the tools will excel in their own ways, and with time it will become clearer which domains Rust and Zig are each better at.
It was very hard to find a link to the table of contents… then I tried opening it and the link didn’t work. I’m on iOS. I’d have loved to take a look quickly what’s in the book…
I found it, maybe it was there all along. Go to chapters, then open the side menu with the hamburger button.
https://github.com/zigbook/zigbook/tree/main/pages
Haha the fucking garbage. Before AI, before the internet, this overexaggerated, hokey prose was written by scummy humans and it came exclusively in porn magazines along with the x-ray specs and sea-monkey fishtanks.
inb4 people start putting a standardized “not AI generated” symbol in website headers
Some text is unreadable because it is so small.
Why do we need another language?
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
I think it's time to have a badge for non-LLM content, and avoid the rest.
There is also Brainmade: https://brainmade.org/
What's stopping AI made content from including this as well?
I imagine it's kind of like "What's stopping someone from forging your signature on almost any document?" The point is less that it's hard to fake, and more that it's a line you're crossing where everyone agrees you can't say "oops I didn't know I wasn't supposed to do that."
The name seems odd to me, because I think it's fine to describe things as a digital brain, especially when the word "brain" doesn't only apply to humans but to organisms as simple as a 959-cell roundworm with 302 neurons.
Also the logo seems to imply a plant has taken over this person and the content was made by some sort of body-snatched pod person.
It’s cordyceps :-D
If this gets any traction, AI bros on Twitter will put it on their generated images just out of spite.
There seems to be https://notbyai.fyi/ and https://no-ai-icon.com/ ..!
I like these ones:
https://cadence.moe/blog/2024-10-05-created-by-a-human-badge...
Even for content that isn't directly composed by an LLM, I bet there'd be value in an alerting system that could ingest your docs and code+commits and flag places where behaviour referenced by the docs has changed and may need to be updated.
This kind of "workflow" LLM use has the potential to deliver a lot of value even in a scenario where the final product is human-composed.
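The deterministic half of that could start embarrassingly simple, before any LLM is involved. A hypothetical shell sketch (the file pattern and docs/ layout are made up) that flags doc files mentioning functions touched by the last commit:

    # List function names added or removed in the last commit, then find
    # doc files that mention them (candidates for a closer LLM review).
    git diff --unified=0 HEAD~1 -- '*.zig' \
      | grep -oE '^[-+].*fn [A-Za-z0-9_]+' \
      | grep -oE '[A-Za-z0-9_]+$' \
      | sort -u \
      | while read -r sym; do
          grep -rln -- "$sym" docs/
        done | sort -u

The LLM's job would then be judgment, not generation: read each flagged doc section alongside the diff and decide whether the prose is actually stale.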
> Most programming languages hide complexity from you—they abstract away memory management, mask control flow with implicit operations, and shield you from the machine beneath. This feels simple at first, but eventually you hit a wall. You need to understand why something is slow, where a crash happened, or how to squeeze every ounce of performance from your hardware. Suddenly, the abstractions that helped you get started are now in your way.
> Zig takes a different path. It reveals complexity—and then gives you the tools to master it.
> This book will take you from Hello, world! to building systems that cross-compile to any platform, manage memory with surgical precision, and generate code at compile time. You will learn not just how Zig works, but why it works the way it does. Every allocation will be explicit. Every control path will be visible. Every abstraction will be precise, not vague.
But sadly people like the prompter of this book will lie and pretend to have written things themselves that they did not. First three paragraphs by the way, and a bingo for every sign of AI.
These posts are getting old.
I had a discussion on some other submission a couple of weeks back, where several people were arguing "it's obviously AI generated" (the style, btw, was completely different to this, with quite a few expletives...). When I put the text into 5 random AI detectors, all except one (which said mixed, ~10% AI) said 100% human. I was downvoted, and the argument became "AI detection tools can't detect AI". Yet somehow the same people claim there are 100% clear telltale signs that it's AI (why those detection tools can't pick up on those signs is baffling to me).
I have the feeling that the whole "it's AI" shtick has become a synonym for "I don't like this writing style".
It really does not add to the discussion. If people posted "there's spelling mistakes, this is rubbish", they would rightfully get downvoted, but somehow saying "it's AI" is acceptable. Would the book be any more or less useful if somebody used AI to write it? So what is your point?
Check out the other examples presented in this thread or read some of the chapters. I'm pretty sure the author used LLMs to generate at least parts of this text. In this case that would be particularly outrageous, since the author explicitly advertises the content as 100% handwritten.
> Would the book be any more or less useful if somebody used AI for writing it?
Personally, I don't want to read AI generated texts. I would appreciate if people were upfront about their LLM usage. At the very least they shouldn't lie about it.
I ran the introduction chapter through Pangram [1], which is one of the most reliable AI-generated text classifiers out there [2] (with a benchmarked accuracy of 99.85% over long-form text), and it gives high confidence for it having been AI-generated. It's also very intuitively obvious if you play a lot with LLMs.
I have no problem at all reading AI-generated content if it's good, but I don't appreciate dishonesty.
[1]: https://www.pangram.com/ [2]: https://arxiv.org/pdf/2402.14873
Right in those same first few paragraphs... "...hiding something from you. Because they are."
Would most LLMs have written that invalid fragment sentence "Because they are." ?
I don't think you have enough to go on to make this accusation.
Yes, that fragment in particular screams LLM to me. It's the exact kind of meaningless yet overly dramatic slop that LLMs love
The em dashes?
There's also the classic “it's not just X, it's Y”, adjective overuse, the rule of 3, total nonsense ("manage memory with surgical precision"? what does that mean?), etc. One of these is excusable, but text composed entirely of AI indicators is either deliberately written to mimic AI style, or the product of AI.
"not just x but y" is definitely a tell tale AI marker. But, people can write that as well. Also our writing styles can be influenced as we've seen so much AI content.
Anyway, if someone says they didn't use AI, I would personally give them the benefit of the doubt for a while at least.
This construction is familiar to anyone who has taken a writing course past middle or high school.
The formal version is "not only... but also" https://dictionary.cambridge.org/us/grammar/british-grammar/..., which I personally use regularly but I often write formally even in informal settings.
"not just... but" is just the less formal version.
Google ngrams shows the "not just ... but" construction has a sharp increase starting in 2000. https://books.google.com/ngrams/graph?content=not+just+*+but...
Same with "not only ... but also" https://books.google.com/ngrams/graph?content=not+only+*+but...
Like many scholarly linguistic constructions, this is one many of us saw in Latin class with non solum ... sed etiam or non modo ... sed etiam: https://issuu.com/uteplib/docs/latin_grammar/234. I didn't take ancient Greek, but I wouldn't be surprised if there's also a version there.
More info
- https://www.phrasemix.com/phrases/not-just-something-but-som...
- https://www.merriam-webster.com/dictionary/not%20just
- https://www.grammarly.com/blog/writing-techniques/parallelis...
- https://www.crockford.com/style.html
- https://englishan.com/correlative-conjunctions-definition-ru...
Meh. I mean, who's it for? People should be adopting the stance that everything is AI on the internet and make decisions from there. If you start trusting people telling you that they're not using AI, you're setting yourself up to be conned.
Edit: So I wrote this before I read the rest of the thread, where everyone is pointing out this is indeed probably AI, so right off the bat the "AI-free" label is conning people.
I guess now the trend is Zig. The era of JavaScript frameworks has come to an end. After that was the AI trend. And now we have Zig and its allocators, especially the arena allocator.
/S
[flagged]
[flagged]
What is it with HN and the "oh, I thought {NAME} is the totally different tool {NAME}" comments? Is it some inside joke?
Or just incredulity that people naming a technology are ignorant of the fact that another well-known technology is already using it.
¯\_(ツ)_/¯
One is Zig the other is Zigbee, I don't understand your comment...
https://stockshed.com/products/t3542-zig-2-4-ghz-wireless-me...
I think ZigBee uses Zig. It's an implementation.
The page you've linked is very confusing, but as far as I can tell that's a Zigbee device that the manufacturer (Tensor plc) consistently describes as a "Zig" device. I have no idea why, it's bizarre.
- This thesis [1] identifies a product in this family as a Zigbee device. It's on the 80th page (numbered 62). Elsewhere it's referred to as a Zig device.
- I can't find anyone else claiming to make Zig devices or any references to a Zig protocol outside of this one manufacturer and their distributors.
- The manufacturer makes a lot of weird typos. They variously say these devices operate at 2.4GHz, 2.4MHz, and 2.4Mhz.
- There's nothing about a Zig protocol on the Zigbee Wikipedia page.
[1] https://theses.ncl.ac.uk/jspui/bitstream/10443/4329/1/Liang%...
[flagged]
Username definitely doesn't check out on this comment. Please try again.
[flagged]
There's an actual production grade database written in zig: https://tigerbeetle.com/
ghostty and bun aren't real world enough for you?
They're not. It's real world when there's a market for paying Zig jobs, not when you can list a few github repos that use it.
Simple: your priors are wrong. People use Zig.
Even if what you say is true, people make bets on new tech all the time. You show up early so you can capture mindshare. If Zig becomes mainstream then this could be the standard book that everyone recommends. Not just that, it’s more likely the language succeeds if it has good learning materials - that’s an outcome the author would love.
> people make bets on new tech all the time. You show up early so you can capture mindshare.
I got in on the ground floor with Elixir and got my startup built on it. Now we have 3 engineers working on Elixir full-time. None of that would have happened if I had looked at a young language and said "it's not used in the real world".
"nobody uses in the real world yet" is uncharitable, as Zig is used in many real-world projects (Bun and Tigerbeetle are written in Zig, for example). But there's value being at the forefront of technologies that you think are going to explode soon, so that's how people find time and energy, I guess.
There's no way someone made this for free. Where do I donate? I'm gonna get so much value from this, it feels like stealing.
It's AI-written FWIW
though maybe AI is getting to the point it can do stuff like this somewhat decently
Dang duped again
The first page says none of the book was written by AI
Yes, it's a false claim
How do you know this? Let us know please, thanks. Edit: I see you used this to check: https://news.ycombinator.com/item?id=45948220
pangram.com, the most accurate and lowest false positive AI detector
https://www.pangram.com/blog/third-party-pangram-evals
Why does this feel like an ad? I've seen pangram mentioned a few times now, always with that tagline. It feels like a marketing department skulking around comments.
> Why does this feel like an ad?
Because it’s written like a tagline instead of like a sentence people would say to each other.
The other pangram mention elsewhere in this comment section is also me -- I'm totally unaffiliated with them, just a fan of their tool
I specify the accuracy and false positive rate because otherwise skeptics in comment sections might think it's one of the plethora of other AI detection tools that don't really work.
FWIW I work on AI and I also trust Pangram quite a lot (though exclusively on long-form text spanning at least 4 or more paragraphs). I'm pretty sure the book is heavily AI written.
SAME. I was looking for a donation button myself! I've paid for worse-quality instructional material. This is just the sort of thing I'm happy to support.
Need this but to learn AI