kace91 a day ago

(Let me start by clarifying that this is not at all a criticism of the author.)

I am usually amused by the way really competent people judge others' context.

This post assumes understanding of:

- emacs (what it is, and terminology like buffers)

- strace

- linux directories and "everything is a file"

- environment variables

- grep and similar

- what git is

- the fact that 'git whatever' works to run a custom script if git-whatever exists in the path (this one was a TIL for me!)

- irc

- CVEs

- dynamic loaders

- file privileges

but then feels it important to explain to the audience that:

>A socket is a facility that enables interprocess communication
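
For what it's worth, the quoted definition is easy to demonstrate. A minimal sketch of "a facility that enables interprocess communication", using Python's socketpair plus fork (Unix-only; names are illustrative):

```python
# A connected pair of sockets used for IPC between a parent and a forked child.
import os
import socket

parent_end, child_end = socket.socketpair()  # connected AF_UNIX pair
pid = os.fork()
if pid == 0:                                 # child process
    parent_end.close()
    child_end.sendall(b"hello from the child")
    child_end.close()
    os._exit(0)
else:                                        # parent process
    child_end.close()
    print(parent_end.recv(64))               # b'hello from the child'
    os.waitpid(pid, 0)
```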

  • ericmcer a day ago

    That feels like part of why some juniors are so confident while more senior engineers are plagued with self-doubt.

    Juniors know how much they have learned, whereas a 10+ year senior (like the author) forgets most people don't know all this stuff intuitively.

    I still will say stuff like "yeah it's just a string" forgetting everyone else thinks a "string" is a bit of thread/cord.

    • aorloff a day ago

      Well coached juniors run through brick walls

      • TeMPOraL 19 hours ago

        Scraping them off the wall is not pleasant, though. But throw enough of them at it, and I guess the wall eventually goes down too.

    • kragen a day ago

      I forget that people think strings are different from sequences of bytes.

      • sfink 19 hours ago

        I didn't know that at some point, then I knew that and found it obvious, and now I don't know it again.

        Strings are very very not sequences of bytes. Strings are a semantic thing. There may be a sequence of bytes in some representation of a particular string, but even then those bytes are not enough to define a string without other stuff. An encoding, at the very least. But even then, there are many things that could be described as a "string". A sequence of code points, perhaps? Or scalar values? Grapheme clusters?

        Not to mention that you may not even have a linear sequence of bytes at the bottom level. You might have a rope (cons cell), or an intern pointer, or...
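
        A quick Python illustration of the encoding and code-point points above (just one language's take, of course):

```python
import unicodedata

s = "café"
# One abstract string, two different byte representations:
assert s.encode("utf-8") == b"caf\xc3\xa9"
assert s.encode("latin-1") == b"caf\xe9"

# Two different code-point sequences for the "same" text:
decomposed = "cafe\u0301"            # 'e' + combining acute accent
assert s != decomposed               # unequal as code points...
assert unicodedata.normalize("NFC", decomposed) == s  # ...equal after NFC

# len() counts code points, not user-perceived characters:
assert len(s) == 4 and len(decomposed) == 5
```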

        • kragen 15 hours ago

          This is a profoundly stupid kind of argument. There isn't even an objective truth you could conceivably convince someone of. There's just how you're choosing to use the word in conflict with a preexisting convention, which marks you as part of some social group, just like "this slaps", "skibidi", "rad", or "whenever". The preexisting convention isn't some apprehension of objective truth either. It's just an arbitrary tradition, like the meaning of any word.

          People who are using the word in the older sense are usually not mistaken. At worst, they're your political enemies, but often they aren't even that; they just have experiences you don't. Attempting to persuade them, as you are doing, can only have the effect of further narrowing your intellectual horizons—even in the unlikely case that you are successful, but especially in the far more common case where they try to avoid you after that.

          I recommend more curiosity and less crusading.

          (In the rare case where someone is mistaken, it's sufficient to say "I meant a Unicode string" or "but we're iterating over codepoints, not bytes," but such mere clarification is not what you're up to.)

      • EdNutting 15 hours ago

        Check out both how Python strings are implemented and the string type’s semantics in the language.

        Strings are sequences of bytes only in the sense that everything stored in memory is a sequence of bytes. The semantics matter far more, and they aren’t the same as a sequence of bytes.

        Also many languages make strings immutable and byte arrays mutable.
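
        A small Python demonstration of that mutability point:

```python
s = "abc"
try:
    s[0] = "x"                # str is immutable: raises TypeError
except TypeError:
    pass

b = bytearray(b"abc")
b[0] = ord("x")               # bytearray is mutable
assert bytes(b) == b"xbc"

# And the semantic layer: one code point can be several bytes.
assert len("é") == 1 and len("é".encode("utf-8")) == 2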

        • kragen 15 hours ago

          I think it's bad to attempt to redefine an established term in this way, but anyway people who use that established definition are not merely fools who lack your wisdom; see https://news.ycombinator.com/item?id=46086919.

          • EdNutting 7 hours ago

            > I recommend more curiosity and less crusading.

            That is wonderfully ironic.

            Anyway, coming from a C background, sure, strings are kind of just sequences of bytes. For people coming from other backgrounds, they'll have different understandings of what a string is (probably more based on semantics of the language they learnt first than on the underlying representation in memory). I'm not trying to persuade you of one definition or another. Nor am I redefining the meaning of a string, as it's clearly subjective by background and/or by context.

            To that end, take my point as merely "you need to know the context", and I happen to believe the context that matters is the semantics of the programming language you're using (as opposed to the underlying representation of an instance of the type in memory).

            My comments are also for the benefit of the many folks (particularly junior members of our community) that perhaps don't have exposure to this way of looking at things.

      • lawn 14 hours ago

        I mean, all data is just binary in the end.

        In most programming languages strings contain more semantics than just sequences of bytes.

        For example, in Rust all Strings are UTF-8, so every Rust string is a byte sequence but not every byte sequence is a valid Rust string.
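
        The same invariant can be shown from Python (in lieu of Rust here): arbitrary bytes don't necessarily decode to a string.

```python
# Valid UTF-8 decodes to text; arbitrary bytes may not.
assert b"hello".decode("utf-8") == "hello"

try:
    b"\xff\xfe\xfd".decode("utf-8")   # not a valid UTF-8 sequence
    raised = False
except UnicodeDecodeError:
    raised = True
assert raised
```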

  • hakunin a day ago

    As a blogger who makes similar assumptions, I think it comes down to how a lot of us from that era "grew up" similarly. Sockets came to relevance later in my career compared to everything else listed here.

    • kace91 a day ago

      That might be part of it, yes.

      As someone younger, ports and sockets appeared very early in my learning. I'd say they appeared in passing before programming even, as we had to deal with router issues to get some online games or p2p programs to work.

      And conversely, some of the other topics are in the 'completely optional' category. Many of my colleagues work in IDEs from the start, and some may not even have used git in its command-line form at all, though I think that extreme is rarer.

    • Thorrez a day ago

      git was released in 2005.

      >The term socket dates to the publication of RFC 147 in 1971, when it was used in the ARPANET. Most modern implementations of sockets are based on Berkeley sockets (1983), and other stacks such as Winsock (1991).

      https://en.wikipedia.org/wiki/Network_socket

      • kragen a day ago

        I read RFC 147 the other day, and it turns out that by "socket" it means "port number", more or less (though maybe they were proposing to also include the host number in the 32-bit "socket", which was quietly dropped within the next few months). Also Berkeley sockets are from about 01979, which is a huge difference from 01983.

        • Thorrez 12 hours ago

          Wikipedia is pretty convinced Berkeley sockets are from 1983[1]. Here's another site saying the same thing[2]. Do you have a source saying 1979?

          [1] https://en.wikipedia.org/wiki/Berkeley_sockets

          [2] https://medium.com/theconsole/40-years-of-berkeley-sockets-8...

          • kragen 12 hours ago

            https://gunkies.org/wiki/4.1_BSD says 01981. I was wrong; 3BSD in 01979 did not have sockets, and neither did 4BSD in 01980. Joy and Fabry's proposal to DARPA in August 01981 https://www.jslite.net/notes/joy2.pdf proposes:

            > Initially we intend to add the facilities described here to UNIX. We will then begin to implement portions of UNIX itself using the IPC [inter-process communication] as an implementation tool. This will involve layering structure on top of the IPC facilities. The eventual result will be a distributed UNIX kernel based on the IPC framework.

            > The IPC mechanism is based on an abstraction of a space of communicating entities communicating through one or more sockets. Each socket has a type and an address. Information is transmitted between sockets by send and receive operations. Sockets of specific type may provide other control operations related to the specific protocol of the socket.

            They did deliver sockets more or less as described in 4.1BSD later that year, but distributing the Unix kernel never materialized. The closest thing was what Joy would later bring about at Sun, NFS and YP (later NIS). They clarify that they had a prototype working already:

            > A more complete description of the IPC architecture described here, measurements of a prototype implementation, comparisons with other work and a complete bibliography are given in CSRG TR/3: "An IPC Architecture for UNIX."

            And they give a definition for struct in_addr, though not today's definition. Similarly they use SOCK_DG and SOCK_VC rather than today's SOCK_DGRAM and SOCK_STREAM, offering this sample bit of source:

                s = socket(SOCK_DG, &addr, &pref);
            
            CSRG TR/3 does not seem to have been promoted to an EECS TR, because I cannot find anything similar in https://www2.eecs.berkeley.edu/Pubs/TechRpts/. And they evidently didn't check their "prototype" socket implementation in to source control until November 01981: https://github.com/robohack/ucb-csrg-bsd/commit/9a54bb7a2aa0...

            In theory that's four months after the 4.1BSD release in http://bitsavers.trailing-edge.com/bits/UCB_CSRG/4.1_BSD_198..., linked from https://gunkies.org/wiki/4.1_BSD, which does seem to have sockets in some minimal form. I don't understand the tape image format, but the string "socket" occurs: "Protocol wrong type for socket^@Protocol not available^@Protocol not supported^@Socket type not supported^@Operation not supported on socket^@Protocol family not supported^@Address family not supported by protocol family^@Address already in use^@Can't assign requested address^@".

            This is presumably compiled from lib/libc/gen/errlst.c or its moral equivalent (e.g., there was an earlier version that was part of the ex editor source code). But those messages were not added to the checked-in version of that file until Charlie Root checked in "get rid of mpx stuff" in February of 01982: https://github.com/robohack/ucb-csrg-bsd/commit/96df46d72642...

            The 4.1 tape image I linked above does not contain man pages for sockets. Evidently those weren't added until 4.2! The file listings in burst/00002.txt mention finger and biff, but those could have been non-networked versions (although Finger was a documented service on the ARPANet for several years at that point, with no sign of growing into a networked hypertext platform with mobile code). Delivermail, the predecessor of sendmail, evidently had cmd/delivermail/arpa-mailer.8, cmd/delivermail/arpa.c, etc.

            That release was actually the month before Joy and Fabry's proposal, so perhaps sockets were still a "prototype" in that release?

            The current sockaddr_in structure was checked in to source control as a patch to sys/netinet/in.h on November 18, 01981: https://github.com/robohack/ucb-csrg-bsd/commit/b5bb9400a15e...

            Kirk McKusick's "Twenty Years of Berkeley Unix" https://www.oreilly.com/openbook/opensources/book/kirkmck.ht... says:

            > When Rob Gurwitz released an early implementation of the TCP/IP protocols to Berkeley, Joy integrated it into the system and tuned its performance. During this work, it became clear to Joy and Leffler that the new system would need to provide support for more than just the DARPA standard network protocols. Thus, they redesigned the internal structuring of the software, refining the interfaces so that multiple network protocols could be used simultaneously.

            > With the internal restructuring completed and the TCP/IP protocols integrated with the prototype IPC facilities, several simple applications were created to provide local users access to remote resources. These programs, rcp, rsh, rlogin, and rwho were intended to be temporary tools that would eventually be replaced by more reasonable facilities (hence the use of the distinguishing "r" prefix). This system, called 4.1a, was first distributed in April 1982 for local use; it was never intended that it would have wide circulation, though bootleg copies of the system proliferated as sites grew impatient waiting for the 4.2 release.

            rcmd, rexec, rsh, rlogin, and rlogind were checked into SCCS on April 2, 01982. At first glance, this socket code looks like it would compile today: https://github.com/robohack/ucb-csrg-bsd/commit/58a2fc8197d0...

            Telnet, also using sockets, had been checked in earlier on February 28: https://github.com/robohack/ucb-csrg-bsd/commit/0dd802d6a649...

  • goranmoomin a day ago

    I hadn't even realized that while I was reading the article, but it is amusing!

    Though one explanation is that, for the other stuff the writer doesn't explain, one can just guess and be half right, and even a wrong guess isn't critical to understanding the bug, whereas sockets and capabilities are the concepts required to understand the post.

    It still is amusing, and I wouldn't have even realized it until you pointed it out.

  • Retric a day ago

    I found that specific clarification useful while everything else was easy to follow.

    It’s not that I was unaware that’s how Unix worked here, just that I rarely think of sockets in that context.

  • dwedge a day ago

    I found it interesting that they know how to use strace, but not how to list open files held by a process, which to me seems simpler. Again, not criticism, just an observation, and I enjoyed the article.

    • parliament32 a day ago

      Given the "(hi Julia!)" immediately after the strace shenanigans, I interpreted this as a third-party hint; the author most likely had not used strace before.

      The author is both an example of and an example for how we can get caught in "bubbles" of tools/things we know and use and don't, and blog posts like this are great for discovery (I didn't know about git invoking a binary in the path like his "git re-edit", for example, until today).
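
      A sketch of that discovery mechanism (assumes git is installed; "git-hello" is a made-up subcommand name, not a real git command):

```python
# git executes any executable named "git-<name>" found on PATH
# when invoked as "git <name>".
import os
import stat
import subprocess
import tempfile

def demo_git_subcommand() -> str:
    with tempfile.TemporaryDirectory() as d:
        script = os.path.join(d, "git-hello")
        with open(script, "w") as f:
            f.write("#!/bin/sh\necho hello from a custom git subcommand\n")
        os.chmod(script, os.stat(script).st_mode | stat.S_IEXEC)
        env = dict(os.environ,
                   PATH=d + os.pathsep + os.environ.get("PATH", ""))
        result = subprocess.run(["git", "hello"], env=env,
                                capture_output=True, text=True)
        return result.stdout.strip()

if __name__ == "__main__":
    print(demo_git_subcommand())
```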

      • dwedge 19 hours ago

        I discovered that by accident. I had a script called git-pr that opened a pull request with GitHub using the last commit message and then pushed it to Slack for approval. I was trying to rewrite it to add a description and wondered why "git pr" pushed an empty message to Slack.

  • derefr a day ago

    All of the things you listed are ops topics. But sockets are a programming concept.

    I would expect a person with 10+ years of Unix sysadmin experience — but who has never programmed directly against any OS APIs, “merely” scripting together invocations of userland CLI tools — to have exactly this kind of lopsided knowledge.

    (And that pattern is more common than you might think; if you remember installing early SuSE or Slackware on a random beige box, it probably applies to you!)

  • mr_toad a day ago

    Most people these days are using HTTP and don't need to touch sockets. (Except for the people implementing HTTP, of course.)

  • kragen a day ago

    To be fair, it does link the CVE, so if you don't know what a CVE is, you can click the link.

    I agree that it's amusing.

  • jsrcout a day ago

    Yep. The Curse of Knowledge is a real thing.

  • addled a day ago

    I mean, the title is a quote from Buckaroo Banzai. Lack of context is part of the fun!

lanthade a day ago

"Don't tug on that" applies to hardware too.

Years ago I worked on contract for a large blue 3 letter company doing outsourced server management for the fancy credit card company. The incident in question happened before my time on the team but I heard about it first hand from the server admin (let's call him Ben) who had been at the center of it.

The data center in question was (IIRC) 160K sqft of raised floor spread across multiple floors in a major metropolitan downtown area. It isn't there anymore. Windows, Unix, Linux, mainframe, SAN, all the associated fun stuff.

Ben was working the day after thanksgiving decommissioning a system. Full software and physical decommission. Approved through all the proper change management procedures.

As part of the decommission Ben removed the network cables from under the raised floor. Standard practice: snip the connector off and pull it back. Easy. Little did he know that the network cable was ever so slightly entangled with another cable. Not enough to give him pause when pulling it, though. It wouldn't have been an issue if the other cable had been properly latched in its ports. It wasn't. That little pull ended up yanking the network connection out of a completely unrelated system. A system managed by a completely different group. A system responsible for credit card processing. On USA Black Friday.

Oops. CC processing went down. It took far too long to resolve. Amazingly, Ben didn't lose his job. After all, he followed all the processes and procedures. Kudos to the management team who kept him protected.

Change management and change freezes were far more stringent by the time I joined the team. There was also now a raised floor infrastructure group and no one pulled a tile without their involvement.

Be careful what you tug on!

adrianmonk a day ago

> This computer stuff is amazingly complicated. I don't know how anyone gets anything done.

I wonder what could be done to make this type of problem less hidden and easier to diagnose.

The one thing that comes to mind is to have the loader fail fast. For security reasons, the loader needs to ensure TMPDIR isn't set. Right now it accomplishes this by unsetting TMPDIR, which leads to silent failures. Instead, it could check if TMPDIR is set, and if so, give a fatal error.

This would force you to unset TMPDIR yourself before you run a privileged program, which would be tedious, but at least you'd know it was happening because you'd be the one doing it.

(To be clear, I'm not proposing actually doing this. It would break compatibility. It's just interesting to think about alternative designs.)
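
A toy sketch of that fail-fast alternative in Python (the variable list is illustrative; glibc's real behavior is to silently unset such variables in secure-execution mode, and its actual list is longer):

```python
import os

# Illustrative subset of the variables a privileged program might refuse.
UNSAFE_VARS = ("TMPDIR", "LD_PRELOAD", "LD_LIBRARY_PATH")

def check_environment(environ, privileged):
    """Fail fast instead of silently scrubbing unsafe variables."""
    present = [v for v in UNSAFE_VARS if v in environ]
    if privileged and present:
        raise SystemExit(f"refusing to run: unsafe variables set: {present}")
    return present

if __name__ == "__main__":
    # Unprivileged run: just report what would have been rejected.
    print(check_environment(os.environ, privileged=False))
```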

  • fweimer 16 hours ago

    Then you'd have to add a wrapper script to su and similar programs that unsets all relevant environment variables. That set is not necessarily fixed; a future version of glibc may well require clearing NSS_FILES_hosts as well.

    (This is about UNSECURE_ENVVARS, if someone needs to find the source location.)

    Making these things more transparent is a good idea, of course, but it is somewhat hard. Maybe we could add SystemTap probes for when environment variables are removed or ignored.

    A related issue is that people stick LD_LIBRARY_PATH and LD_PRELOAD settings into shell profiles/login scripts and forget about them, leading to hard-to-diagnose failures. More transparency there would help, but again it's hard to see how to accomplish that.

  • tetha a day ago

    Mh, I am starting to dislike this kind of hyper-configurability.

    I know when this was necessary and used it myself quite a bit. But today, couldn't we just open up a mount namespace and bind-mount something else to /tmp, like systemd's private tempdirs? (Which broke a lot of assumptions about tmpdirs and caused a bit of ruckus, but on the other hand, I see their point by now.)

    I'm honestly starting to wonder about a lot of these really weird, prickly and fragile environment variables which cause security vulnerabilities, if low-overhead virtualization and namespacing/containers are available. This would also raise the security floor.

    • josephcsible a day ago

      > But today, couldn't we just open up a mount namespace and bind-mount something else to /tmp, like systemd's private tempdirs?

      No, because unless you're already root (in which case you wouldn't have needed the binary with the capability in the first place), you can't make a mount namespace without also making a user namespace, and the counterproductive risk-averse craziness has led to removing unprivileged users' ability to make user namespaces.

      • kragen a day ago

        It's probably true that there are setuid programs that can be exploited if you run them in a user namespace. You probably need to remove setuid (and setgid), as Plan 9 did, in order to do this.

        • josephcsible a day ago

          I meant distros are moving towards no unprivileged user namespaces at all, not just no setuid programs inside them.

          • kragen a day ago

            Is "just no setuid programs inside them" even an option?

  • ericmcer a day ago

    It is complex. There was another posting on HN where commenters were musing over why software projects have a much higher failure rate than any other engineering discipline.

    Are we just shittier engineers, is it more complex, or is the culture such that we output lower quality? Does building a bridge require less cognitive load than a complex software project?

    • rout39574 a day ago

      I think it's a cultural acceptance of lower quality, happily traded for deft execution, over and over.

      We're better at encapsulating lower-level complexities in e.g. bridge building than we are at software.

      All the complexities of, say, martensite grain boundaries and what-not are implicit in how we use steel to reinforce concrete. But we've got enough of it in a given project that the statistical summaries are adequate. It's a member with such-and-such strength in tension, and such in compression, and we put a 200% safety factor in and soldier on.

      And nobody can take over the ownership of leftpad and suddenly falsify all our assumptions about how steel is supposed to act when we next deploy ibeam.js ...

      The most well understood and dependable components of our electronic infrastructure are the ones we cordially loathe because they're composed in shudder COBOL, or CICS transactions, or whatever.

      • LorenPechtel a day ago

        Exactly. The properties rarely matter outside the item. The column is of such-and-such a strength, that's it. But when things get strange we see failures. Perfect example: Challenger. Was the motor safe sitting on the pad? Yes. Was the motor safe in flight? Yes. Was the motor safe at ignition? On the test stand, yes. Stacked for launch, ignition caused the whole stack to twang--and maybe the seals failed....

    • knollimar 9 hours ago

      I write some software to automate some engineering processes in construction.

      I'd say it comes from some of (order of most to least imo, but I'm only mid level so take what I say accordingly):

      * physical processes have a fuzzy good enough. The bridge stands with thrice its expected max load. It is good enough.

      * most software doesn't have life safety behind it. In construction, life safety systems receive orders of magnitude more scrutiny.

      * physical projects don't have more than 20 different interdependencies; there's an upper limit on arbitrary complexity

      * physical projects usually have clearish deadlines (they lie, but by a constant factor)

      * The industries are old enough that they check juniors before they give them big decisions.

      * Similarly, there exists PE accountability in construction

    • jcynix a day ago

      > Are we just shittier engineers, is it more complex [...]

      Both IMO: first, anybody could buy a computer during the last three decades, dabble in programming without learning basic concepts of software construction and/or user-interface design and get a job.

      And copying bad libraries was (and is) easy. I still get angry when software tells me "this isn't a valid phone number" when I cut/copy/paste a number with a blank or a hyphen between digits. Or worse, libraries which expect the local part of an email address to only consist of alphanumeric characters and maybe a hyphen.

      Second, writing software definitely is more complex than building physical objects, because there are "no laws of physics" limiting what can be done. With a building or a bridge, physics tells you which rules you must follow to get something stable and capable of withstanding rain, wind, etc.

      • jz391 a day ago

        Absolutely. As an Electrical Engineer turned software guy, Ohm's/Kirchhoff's laws remain as valid and significant as when I was taught them 35 years ago. For software however, growth of hardware architectures/constraints made it possible to add much more functionality. My first UNIX experience was on PDP-11/44, where every process (and kernel) had access to an impressive maximum of 128K of RAM (if you figured out the flag to split address and data segments). This meant everything was simple and easy to follow: the UNIX permission model (user/group/other+suid/sgid) fit it well. ACLs/capabilities etc were reserved for VMS/Multics, with manuals spanning shelves.

        Given hardware available to an average modern Linux box, it is hardly surprising that these bells and whistles were added - someone will find them useful in some scenarios and additional resource is negligible. It does however make understanding the whole beast much, much harder...

    • kragen a day ago

      It's just that people are somewhat rational.

      There are no big wins left in bridge building, so there is no justification for taking big risks. Also, in most software project failures, the only cost is people's time; no animals are harmed, no irreplaceable antique guitars are smashed, no ecosystems are damaged, and no buses of schoolchildren plunge screaming into an abyss.

      Your software startup didn't get funded? Well, you can go back and finish college.

  • LorenPechtel a day ago

    Yeah, this case is a good example of why many people don't like Linux. Too much interconnected stuff that can go wrong.

jcynix a day ago

BTW, the author "mjd" is the author of the excellent book "Higher-Order Perl" which is available online at https://hop.perl.plover.com/book/

  • pinkmuffinere a day ago

    I love mjd! He once replied to me on an HN thread and it lives forever in my memory :)

linsomniac a day ago

The Internet needs more Buckaroo Banzai references. Because wherever you go, there you are.

  • neilk a day ago

    Yup. I nearly had this movie memorized when I was a child.

    https://www.youtube.com/watch?v=aWXuDNmO7j8

    Peter Weller, playing Buckaroo Banzai, is late for his military-particle-physics-interdimensional-jet-car test because he's helping Jeff Goldblum's character with neurosurgery. Later that day he will go play lead guitar in an ensemble.

    Scriptwriting gurus advise that your protagonist should have flaws and character progression. The writers of this movie disagree.

thayne a day ago

Setting a capability on the perl executable seems like a very bad idea. That effectively grants that capability to everything that is able to invoke perl (unless restricted by no_new_privs).
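
The effect is easy to see from Python (same idea applies to perl): without cap_net_bind_service (or root), binding a port below 1024 fails with EACCES; with that file capability set on the interpreter binary, any script it runs could do it. Note that on some systems the sysctl net.ipv4.ip_unprivileged_port_start is lowered, in which case the bind succeeds anyway.

```python
import errno
import socket

def try_bind(port: int) -> str:
    """Attempt to bind a TCP port; report EACCES instead of raising."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return "bound"
    except OSError as e:
        return "EACCES" if e.errno == errno.EACCES else f"errno {e.errno}"
    finally:
        s.close()

if __name__ == "__main__":
    print("ephemeral port:", try_bind(0))  # always allowed
    print("port 80:", try_bind(80))        # needs cap_net_bind_service or root
```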

  • aftbit a day ago

    Yeah why did he want non-root perl to be able to bind to low-numbered ports? Seems like one of those typical footguns of applying non-standard configurations.

    • thayne a day ago

      My reading is the author didn't do that; rather, his/her employer's configuration system had done so.

      Setting TMPDIR to /mnt/tmp also seems to come from that.

      I would guess both were the result of someone who didn't really know what they were doing trying things until they found something that got what they needed to work, then pushed that out without understanding the broader implications.

markstos a day ago

And this was written 10 years ago, when computers were far less complicated and vibe-coded sleeper bugs weren't a thing.

  • WJW a day ago

    Vibe-coded sleeper bugs have always been a thing; they just came from the boss's nephew who was still learning PHP at the time and left several years ago.

    Also, computers in 2015 were not meaningfully less complex than today. Certainly not when the topic is weird emacs and perl interactions.

    • marcosdumay a day ago

      Even if the topic was web applications (that are where Big Complexity thrives), 2015 was about peak complexity. Things have improved a bit since then.

    • add-sub-mul-div a day ago

      The problem isn't that AI is doing something new, we all know that it isn't. The problem is that the boss' nephew is becoming the rule now rather than the exception.

      • jama211 a day ago

        It also makes bugs easier to find and resolve. You win some you lose some. Perhaps by the time it is the rule they’ll be better at writing safer code.

  • detourdog a day ago

    From my perspective vibe coding was always a thing.

eichin 21 hours ago

A couple of paragraphs in I started wondering if it was going to turn out to be systemd-tmpfiles (in Ubuntu 16.04, I think? The symptom was "about 10 days after login, X11 forwarding over ssh stopped working, but local apps could still open windows just fine." I remember it as an Ubuntu-specific misconfiguration, though I think the systemd defaults were changed to be less of a footgun in response...)

I was pleased that it was more interesting than that, and I want people to write more twitchy-detail-post-mortems like this :-)

LordGrey a day ago

Buckaroo Banzai: You can check your anatomy all you want, and even though there may be normal variation, when it comes right down to it, this far inside the head it all looks the same. No, no, no, don’t tug on that. You never know what it might be attached to.

  • buckaroobanzi a day ago

    See, this is the point, for me, where it started to look like a problem. You know, I wanted to sacrifice the precentral vein in order to get some exposure, but because of this guy's normal variation, I got excited, and all of a sudden I didn't know whether I was looking at the precentral vein, or one of the internal cerebral veins, or the vein of Galen, or the vascular vein of Rosenthal. So, on my own, to me, at this point, I was ready to say that's it, let's get out.

klodolph a day ago

I have an overly reductive take on this—it’s Unix environment variables.

You have your terminal window and your .bashrc (or equivalent), and that sets a bunch of environment variables. But your GUI runs with, most likely, different environment variables. It sucks.

And here’s my controversial take on things—the “correct” resolution is to reify some higher-level concept of environment. Each process should not have its own separate copy of environment variables. Some things should be handled… ugh, I hate to say it… through RPC to some centralized system like systemd.
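
The per-process-copy behavior being lamented, in concrete form (a small Python sketch; DEMO_VAR is a made-up variable name):

```python
# Each process receives a private copy of the environment at exec time;
# changes made by a child never propagate back to the parent.
import os
import subprocess
import sys

os.environ["DEMO_VAR"] = "parent-value"
child = subprocess.run(
    [sys.executable, "-c",
     "import os; os.environ['DEMO_VAR'] = 'child-value'; "
     "print(os.environ['DEMO_VAR'])"],
    capture_output=True, text=True)

print(child.stdout.strip())       # child-value (the child's own copy)
print(os.environ["DEMO_VAR"])     # parent-value (the parent's copy, untouched)
```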

  • rao-v a day ago

    Windows registry just sort of hovering in the backdrop

    • klodolph 21 hours ago

      Something that is still inheritable, between “there is one and it is global” and “there is a separate copy for each process”.