[Daniel's week] March 13, 2026

Daniel Stenberg daniel at haxx.se
Fri Mar 13 23:32:56 CET 2026


Hello friends

We're at another weekend. Time for a Daniel email!

## missed last week!

I took that trip to Oslo and the NDC Security conference last week where I
talked about curl and had a good time, but it gave me a really busy end of week
and that made me not prioritize getting my email out. So there was no email
last week. Maybe this week's version will compensate.

## Bug-Bounty: False

Someone on Mastodon made me aware of the newly registered keyword for
.well-known/security.txt: Bug-Bounty. Now we clearly announce even there that
we have no bug bounty.

I think the message has been read and understood. Not a single security
reporter has since questioned the lack of reward or even brought it up. We
make an effort to make it really clear, and to mention it clearly several
times through the process.

With only a month and a half having passed without a bounty, and us switching
to GitHub and then back to HackerOne, I think it is way too early to draw any
definite conclusions about the change. We can see that we still get plenty of
security reports submitted [1]. The frequency of AI slop may have dropped.

## zip bombs

A known challenge with automatic decompression the way curl offers it is zip
bombs [2]: a tiny amount of source data can get decompressed and "explode"
into an enormous amount.
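The effect is easy to demonstrate. This is not curl code, just a generic
Python sketch using zlib (deflate being one of the encodings automatic
decompression can handle) to show how extreme the expansion ratio can get:

```python
import zlib

# Ten megabytes of zeros: pathologically compressible data, the raw
# ingredient of a zip bomb.
original = b"\0" * (10 * 1024 * 1024)
compressed = zlib.compress(original, level=9)

# The compressed payload is tiny compared to what it expands back into.
ratio = len(original) // len(compressed)
print(f"{len(compressed)} bytes expand to {len(original)} bytes (~{ratio}x)")
```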

We get this reported as a security problem every now and then and for that
reason we explicitly document this risk and that users need to be aware of it.
Perhaps this feature should even only be used with trusted servers (over
secured TLS).

In a recent iteration of this discussion we finally decided that in the next
release (8.20.0) we are going to apply the maximum file size limit to the
expanded data size as well [3]. It has so far only limited the amount of data
downloaded, but we think users in general already believe that this option
works like this, so we might as well make it so. When we ship that curl
version at the end of April 2026, we might finally put an end to the lengthy
series of zip bomb reports.
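The general technique for enforcing such a limit is to cap the output while
inflating, not the input. A Python sketch of that idea using zlib (again, an
illustration of the general approach, not curl's actual implementation):

```python
import zlib

bomb = zlib.compress(b"\0" * (10 * 1024 * 1024))  # small input, huge output
limit = 1024 * 1024  # hypothetical cap on the *expanded* size

d = zlib.decompressobj()
# max_length bounds how much decompressed output we accept in one call
out = d.decompress(bomb, limit)
if d.unconsumed_tail:
    # compressed input remains after hitting the cap: treat as over-limit
    print("aborting: expanded data exceeds the configured limit")
```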

## "just"

For a long time we have banned the word "very" from curl documentation. This
originated when I created the everything curl [4] book and used tools to point
out my English mistakes and how to enhance the language I used. They often
highlighted how using the word "very" is to be avoided and rather be rephrased
into using a more nuanced term. By banning this word, we force the author of
the documentation to write better. To be more specific.

In recent weeks we have similarly started to ban other "filler words": words
that should be avoided in English documentation since they add no value and
mostly get inserted out of habit. Words like "just", "simply", "basically"
and "however". This makes the documentation more precise and more consistent.

## badwords

We have this script in curl to find "bad words" in docs and source comments:
badwords [5]. "Bad" as in they should be rephrased or avoided. Words I just
mentioned above, like "very" or contractions like "you're". These words are
not bad in a universal sense, but they are considered bad by us for use in
curl documentation. This script has been running in CI for years to verify
that we only merge correct language. The script's use has also expanded:
nowadays it also verifies source code comments.
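In spirit the check is a word-list scan over the text. A hypothetical
miniature version in Python (the real script and its word list live in
scripts/badwords in the curl repo; this is not its implementation):

```python
import re

# Hypothetical sample entries; the real list is much longer.
BADWORDS = {
    "very": "rephrase with a more specific term",
    "you're": "avoid contractions in docs",
}

PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, BADWORDS)) + r")\b", re.IGNORECASE)

def check(text):
    """Return (line number, offending word, advice) tuples."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for m in PATTERN.finditer(line):
            hits.append((lineno, m.group(0), BADWORDS[m.group(0).lower()]))
    return hits

for hit in check("This is very simple.\nYou're almost done."):
    print(hit)
```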

The other day I decided I should make it easier to run this script locally so
that it is easier to verify the language myself before I create my PR and
waste CI and potentially someone else's time.

Running it locally, I realized it took 48 seconds to complete on my decently
fast machine. That was annoyingly slow, to the degree that I first considered
adding a progress bar or something to show that something is still happening.

I also took a look at the code to see what I could do to maybe make it run a
little faster. 48 seconds can feel like a long time when waiting to get a
thumbs-up.

I realized I could join a few regexes into single ones instead of doing loops
and suddenly the badwords script only took 8 seconds to run. Happy with this
6x speedup I merged my work.

Stefan Eissing then gave the script an extra look and with just some small
tweaks he more than doubled the execution speed; with his help it completed in
3.6 seconds for me.

Inspired, I went back, took a new look and realized I left out one regex from
my earlier "squeeze". By merging the whitelisting logic into a single huge
regex instead of a loop, the script could complete in 1.1 seconds.
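The principle behind both speedups is the same: one alternation regex makes a
single pass over the text, where a loop of per-word regexes makes one pass per
word. A small Python illustration of the idea (not the badwords script
itself):

```python
import re
import timeit

words = ["very", "just", "simply", "basically", "however"]
text = "it is very simple and it just works " * 5000

# Loop approach: each word gets its own regex and its own pass over text.
singles = [(w, re.compile(r"\b%s\b" % w)) for w in words]
def loop_scan():
    return [w for w, p in singles if p.search(text)]

# Merged approach: one alternation, one pass over the text.
merged = re.compile(r"\b(?:%s)\b" % "|".join(words))
def merged_scan():
    return sorted(set(merged.findall(text)))

print("loop:  ", timeit.timeit(loop_scan, number=20))
print("merged:", timeit.timeit(merged_scan, number=20))
```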

Speed is not everything. As I mentioned above, the script is also used to
verify language in source code. The challenge there is that source code is
really code + comments + strings, and the way we used the script it ran over
all that content, so if we banned a word it could not be used as a variable
name in code. Not ideal. For a long time we solved that with kludgy
work-arounds, like whitelisting those specific words for source code scans.

In order to increase accuracy in word scanning for source code, I wrote up a
new stand-alone tool, c-comments [6], which reads C source code as input and
outputs all its comments and double-quoted strings, with everything else
blanked out. Using this filter, variable and function names in source code
are not checked, only the parts we want checked. We could remove some kludges.
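The core trick can be sketched in a few lines of Python: match comments and
string literals, copy them through, and blank every other character,
preserving newlines so line numbers still line up. This is a simplified
illustration of the idea, not the actual c-comments tool:

```python
import re

# String literal comes first so that e.g. "http://..." inside a string
# is not mistaken for a line comment. Character literals are ignored in
# this simplified sketch.
TOKEN = re.compile(
    r'"(?:\\.|[^"\\\n])*"'   # double-quoted string, escapes handled
    r'|//[^\n]*'             # line comment
    r'|/\*.*?\*/',           # block comment
    re.DOTALL)

def keep_comments_and_strings(source):
    out, pos = [], 0
    for m in TOKEN.finditer(source):
        gap = source[pos:m.start()]
        out.append(re.sub(r"[^\n]", " ", gap))  # blank code, keep newlines
        out.append(m.group(0))
        pos = m.end()
    out.append(re.sub(r"[^\n]", " ", source[pos:]))
    return "".join(out)

src = 'int basically_a_name = 1; /* a very bad comment */ puts("done");'
print(keep_comments_and_strings(src))
```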

This extra filtering and much increased accuracy added a little execution time
and it now completes in 1.7 seconds. There is probably something more we can
do to gain a little more speed.

## release

In the two previous curl releases we managed to fix around 400 bugs per
release. This was, I believe, primarily because of the sudden boost in new
findings and bug reports generated by the new wave of AI powered code
analyzers. I say wave because, similar to previous times through the years
when a new kind of tool has arrived, the new set of tools suddenly finds new
things and forces us to raise the bar and improve to the next level. We saw
that happen when we started running good code analyzers on the code 15 years
ago. We saw it happen when we started fuzzing the code 10 years ago.

Eventually we have fixed most of the issues these tools can find and life and
reality slowly return to normal. With intense and constant development we of
course are not immune to introducing new problems as we go along, but at least
the AI analyzer powerhouses Aisle, Zeropath and Codex Security seem to have
now found and reported most of the problems they could find in the existing
source code.

We shipped curl 8.19.0 [7] this Wednesday with "only" 264 bugfixes. Some of
the bugs led to security vulnerabilities, so we announced four separate
advisories [8] and their corresponding CVEs this time. At least one of them
was found with an AI tool. One of them was a C mistake, the other three
weren't.

## curl record

CVE-2026-3784 [9] set a new curl record: the oldest vulnerability ever found
in curl. This flaw existed in the curl source code for 24.97 years before it
was discovered.

It is hard to explain why or how a vulnerability can survive for so long.
Especially after you have been told about it, as in hindsight it always seems
so strange that it wasn't noticed or fixed earlier.

It is for a special use case that few users exercise. It is subtle. It is also
only low severity. It is not an end-of-the-world flaw. curl has hundreds of
options that can be used in countless combinations. Security is hard.

Some bugs remain shallow no matter the number of eyeballs.

## NTLM

We have decided to make NTLM support opt-in rather than enabled by default in
the next curl version. This is a strange Microsoft-invented paradigm-breaking
HTTP authentication method that has weak security properties. It does not
work over HTTP/2 or HTTP/3, and it has been deprecated by Microsoft
themselves for several years already.

There's a patch floating around on the wget mailing list to drop NTLM support
there as well.

## SMB

curl only supports version 1 of this protocol. Hardly anyone uses it and it
has weak security properties. It uses NTLM internally so when switching off
NTLM, this protocol is also switched off.

We still decided to make SMB opt-in separately from NTLM so starting in curl
8.20.0 users need to enable this protocol in their build to get support for
it.

## 10K downloads

The Linux Foundation, the organization that we want to love but that so often
makes that a hard bargain, has created something they call “Insights” [10].

It is a fancy-looking site with graphs and tables with numbers about Open
Source projects. But yikes how misleading. It was so wrong and bad that I
decided to only focus on a single funny detail in my blog post about it [11]:
the number of downloads.

## dependency tracking

I happened to look at the "repositories that depend on curl" tab on GitHub
and it was so hilarious I had to write a blog post about (the lack of)
dependency tracking in the real world [12].

## nuget

As we passed the three-year anniversary of my 2023 complaint about nuget, I
decided to revisit this corner of the web again and see how the land lies
these days. It is equally bad and they did not learn anything. So I wrote
another blog post [13] about their service, which will change nothing.

## Coming up

- Monday: if no major bug has been reported, there is no patch release
- Thursday: 28 years since the first ever curl release

## Links

[1] = https://curl.se/dashboard1.html#hackerone
[2] = https://en.wikipedia.org/wiki/Zip_bomb
[3] = https://github.com/curl/curl/pull/20787
[4] = https://everything.curl.dev/
[5] = https://github.com/curl/curl/blob/master/scripts/badwords
[6] = https://github.com/bagder/c-comments
[7] = https://daniel.haxx.se/blog/2026/03/11/curl-8-19-0/
[8] = https://curl.se/docs/security.html
[9] = https://curl.se/docs/CVE-2026-3784.html
[10] = https://insights.linuxfoundation.org/
[11] = https://daniel.haxx.se/blog/2026/03/09/10k-curl-downloads-per-year/
[12] = https://daniel.haxx.se/blog/2026/03/10/dependency-tracking-is-hard/
[13] = https://daniel.haxx.se/blog/2026/03/12/chicken-nuget/

-- 

  / daniel.haxx.se

