[Daniel's week] April 19, 2024

Daniel Stenberg daniel at haxx.se
Fri Apr 19 23:48:45 CEST 2024


Hello!

Another week.

## vitriol

Recently, maybe over the last year or so, I have noticed a slightly unfortunate
trend on my blog. I write new posts at a rate of maybe a few per week or month,
and they are always techy: about open source, development and networking, but
most of all about various things curl. Things that occupy my mind.

The trend is simply that I have managed to attract several readers who
regularly come back to submit nasty comments on my posted articles. There seem
to be at least two different individuals who now comment on just about every
new post, either with brief dismissive remarks about how no one cares what I
write, or with longer discussion pieces in which they basically shoot me down
in various elaborate ways because "curl is just the hobby of some guy that has
no business providing a service to a billion people." [1]

Fortunately I am thick-skinned enough, and I have plenty of friends, so I
don't care much about these incidents, and it is usually not too much work to
just delete these kinds of offensive comments before they are even displayed
on the site. It is just curious how this came to happen and why these guys (I
assume they are men) continuously come back to haunt me even when I clearly
don't provide content they want to read... Why oh why?

As I have said before: software is easy, humans are difficult.

## ECH merge

This week I merged the ECH (Encrypted Client Hello) PR [2] into the master
branch of the curl repository. This is work that has been stewing for several
years.

ECH is the next step in encrypting more data in the TLS handshake that
otherwise would appear in clear text and therefore be readable by anyone who
is monitoring the wire. The most widely discussed entry is perhaps the SNI
field, which typically holds the host name of the target server/service. (ECH
was once known as ESNI.)

In order to know how to send the encrypted fields, a client needs the keys
for the target host. They are provided in the DNS record called HTTPS. Since
there are no convenient standard libc calls to get such data for a given host
name, a first complication is how to add that ability to curl; getaddrinfo()
is just not good enough for this. In the current ECH work, the DoH
functionality has been extended to also manage HTTPS records. This has the
downside that a user needs to use DoH for ECH to be possible. We of course
want to widen this going forward, but it is not entirely straightforward how
that is going to be done. If you have ideas, let us know.

A second complication is the TLS support. Yes, another protocol feature that
depends on features in TLS libraries that are either not there yet, early-stage
or only provided via external patches. BoringSSL and wolfSSL support the ECH
functionality already, and there is a rather huge external OpenSSL patch [3],
but that's it. I don't think I need to explain how this of course will delay
how users can and will access and use this feature, in curl and elsewhere. You
can rest assured that you will hear more about this from me going forward.

Both Chrome and Firefox support ECH already. Chrome can do it without DoH;
Firefox requires DoH to be used.

Oh, and I should probably emphasize that ECH support in curl is labeled
EXPERIMENTAL. It needs to be explicitly enabled in the build and we discourage
everyone from using it in production: we might change things as we go forward.
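
To make this a bit more concrete, here is a minimal libcurl sketch of how I
imagine using it from code. It assumes a curl built with the experimental ECH
code enabled and a TLS backend that can do it. CURLOPT_DOH_URL is a
long-standing option; the new CURLOPT_ECH option and its accepted values are
part of the experimental interface and may change, and the DoH URL below is
just a placeholder.

    /* Sketch: an ECH-enabled transfer. Requires a curl built with the
     * experimental ECH support and a capable TLS backend. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(!curl)
        return 1;

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

      /* ECH needs the HTTPS DNS record, which curl currently only gets via
         DoH - replace with a real DoH resolver URL */
      curl_easy_setopt(curl, CURLOPT_DOH_URL,
                       "https://dns.example.com/dns-query");

      /* ask for ECH when the HTTPS record provides a config; the exact
         accepted values are experimental and may change */
      curl_easy_setopt(curl, CURLOPT_ECH, "true");

      CURLcode res = curl_easy_perform(curl);
      if(res != CURLE_OK)
        fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));

      curl_easy_cleanup(curl);
      return (int)res;
    }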

## timeouts

libcurl supports doing parallel transfers without any restrictions on the
amount of concurrency. One of the trickiest things in debugging curl is when a
user reports problems in the style of "when I have 200 transfers in progress,
some of them [fill in problem]". That's exactly what I worked on earlier this
week.

In this case it turned out that when doing a large number of transfers where
many of them would time out, some of them timed out too slowly and lingered
around for too long.

Finding the patterns among the hundreds of separate transfers is of course the
key, and then ideally gradually reducing the concurrency or increasing the
debugging output until you understand what is going on.

Last week I fixed one part of it, which only made things slightly better.
After having had the problem bounce around like crazy in my head over the
weekend, an idea suddenly dawned on me, and after some basic testing I could
verify that my theory was correct: the code was erroneously restarting the
connect timeout at times, thus delaying when it expired. Such a great relief
to have found and eventually fixed it. Hundreds of transfers can now time out
much more accurately than before.
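
For reference, this is the kind of setup where the problem would show: lots of
concurrent transfers driven by the multi interface, each with a connect
timeout. A minimal sketch, where the URL, the transfer count and the timeout
are made up for illustration and error checks are mostly omitted:

    /* Sketch: many concurrent transfers with a connect timeout, roughly the
     * class of setup where a mis-restarted connect timer would matter. */
    #include <curl/curl.h>

    #define NTRANSFERS 200

    int main(void)
    {
      curl_global_init(CURL_GLOBAL_DEFAULT);
      CURLM *multi = curl_multi_init();

      for(int i = 0; i < NTRANSFERS; i++) {
        CURL *e = curl_easy_init();
        curl_easy_setopt(e, CURLOPT_URL, "https://example.com/");
        /* each transfer gives up connecting after five seconds */
        curl_easy_setopt(e, CURLOPT_CONNECTTIMEOUT_MS, 5000L);
        curl_multi_add_handle(multi, e);
      }

      int running = 1;
      while(running) {
        curl_multi_perform(multi, &running);
        if(running)
          curl_multi_poll(multi, NULL, 0, 1000, NULL);

        /* drain completed transfers and clean up their easy handles */
        int msgs;
        CURLMsg *m;
        while((m = curl_multi_info_read(multi, &msgs))) {
          if(m->msg == CURLMSG_DONE) {
            CURL *e = m->easy_handle;
            curl_multi_remove_handle(multi, e);
            curl_easy_cleanup(e);
          }
        }
      }

      curl_multi_cleanup(multi);
      curl_global_cleanup();
      return 0;
    }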

## hyper

Is the weight of carrying the hyper support worth it, or is it time to drop
it? Does anyone actually want to use curl with the hyper backend? I brought
the question to the mailing list [6] and Mastodon this week and I have gotten
a few mixed replies. There is clearly no strong desire or support, but I think
there are signs that we can probably give it all some more time to see if
things can develop.

No decision has been made. With curl up coming up in just two weeks, I figure
we can discuss it properly there before we make a real decision. Or maybe we
decide to not make a decision now and just postpone it. I will get back on
this.

## Cisco fix

On a lighter note, in an IRC discussion I was reminded of the great Cisco fix
of 2019, when they fixed a security flaw in their vulnerable RV320/325 routers
by simply making sure that any HTTP requests done with the user agent "curl"
would be rejected with a 403 response [4]. This was most likely because the
reports showing how to run the exploits mostly demonstrated them using curl.
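
To illustrate why that is not much of a fix: the User-Agent header is entirely
set by the client, so a request can claim to be anything it likes. A tiny
hypothetical libcurl sketch, where the address is just a placeholder:

    /* The User-Agent header is fully client-controlled, so a check on the
     * string "curl" offers no real protection. Placeholder address. */
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(!curl)
        return 1;
      curl_easy_setopt(curl, CURLOPT_URL, "https://192.0.2.1/");
      /* send any user agent but "curl" and such a filter does not apply */
      curl_easy_setopt(curl, CURLOPT_USERAGENT, "definitely-not-curl/1.0");
      CURLcode res = curl_easy_perform(curl);
      curl_easy_cleanup(curl);
      return (int)res;
    }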

As a special fun twist: I posted the same image on LinkedIn [5] and watched
the "analytics" for it a few days later. The third most common employer among
the viewers of that post was... Cisco.

## CodeSonar

I was contacted by the good peeps running the CodeSonar static code analyzer,
and they have graciously fired up daily scans of the curl source code.

Normally when new code analyzers show up, which they do every once in a while,
they rarely contribute much news since we already frequently scan the code
using multiple tools, but CodeSonar turned out to have a few aces up its
sleeve and identified several flaws that none of the others have. I am
impressed and I have started to address the nits it reports.

## curl up

A huge box of curl mugs showed up [7].

## Coming up

- get started on my curl up presentations
- final week before curl feature freeze

## Links

[1] = https://daniel.haxx.se/blog/2024/03/08/the-apple-curl-security-incident-12604/comment-page-1/#comment-26945
[2] = https://github.com/curl/curl/pull/11922
[3] = https://github.com/sftcd/openssl
[4] = https://mastodon.social/@bagder/112274613782712427
[5] = https://www.linkedin.com/posts/danielstenberg_curl-activity-7185597818894512130-kHFS
[6] = https://curl.se/mail/lib-2024-04/0021.html
[7] = https://mastodon.social/@bagder/112297038163606814

-- 

  / daniel.haxx.se

