Dear everyone.
Never use pull_request_target.
This is not the first time it’s bitten people. It’s not safe, and honestly GitHub should have better controls around it or remove and rework it — it is a giant footgun.
> One of our engineers figured out this was because it triggered on: pull_request which means external contributions (which come from forks, rather than branches in the repo like internal contributions) would not have the workflow automatically run. The fix for this was changing the trigger to be on: pull_request_target, which runs the workflow as it's defined in the PR target repo/branch, and is therefore considered safe to auto-run.
Dear GitHub Actions: what the heck?
There are so many things about GitHub Actions that make no sense.
Why are actions configured per branch? Let me configure Actions somewhere on the repository that is not modifiable by some yml files that can exist in literally any branch. Let me have actual security policy for configuring Actions that is separate from permission to modify a given branch.
Why do workflows have such strong permissions? Surely each step should have defined inputs (possibly from previous steps), defined outputs, and narrowly defined permissions.
Why can one step corrupt the entire VM for subsequent steps?
Why is security almost impossible to achieve instead of being the default?
Why does the whole architecture feel like someone took something really simple (read a PR or whatever, possibly run some code in a sandbox, and produce an output) of the sort that could easily be done securely in JavaScript or WASM or Lua or even, sigh, Docker and decided to engineer it in the shape of an enormous cannon aimed directly at the user’s feet?
While I agree with the general sentiment that lots of things about GH actions don't make sense, when you actually look at what the vulnerability was, you'll find that for lots of your questions it wasn't GitHub Actions' fault.
This is the vulnerable workflow in question: https://github.com/PostHog/posthog/blob/c60544bc1c07deecf336...
> Why are actions configured per branch?
This workflow uses `pull_request_target`, where the actions are configured by the branch you're merging the PR into. That should be safe: the attacker can't modify the YAML the actions run from.
> Why do workflows have such strong permissions?
The permissions the workflow runs with are irrelevant here, because the workflow runs the JS script with a custom access token rather than the default permissions of the GH Actions runner.
> Why is security almost impossible to achieve instead of being the default?
The default for `pull_request_target` is to check out the branch you're trying to merge into (which, again, should be safe, as it doesn't contain the attacker's files), but this workflow explicitly checks out the attacker's branch on line 22.
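To make the trap concrete, here's a hedged sketch of the risky pattern (a hypothetical workflow, not PostHog's actual file; `BOT_TOKEN` and the script path are placeholders): `pull_request_target` makes secrets available, and the explicit checkout overrides the safe default with the attacker-controlled head.

```yaml
# DANGEROUS sketch: pull_request_target runs with secrets available,
# but this checkout pulls in the attacker's code from the fork.
name: assign-reviewers
on: pull_request_target

jobs:
  assign:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Overrides the safe default (the base branch) with the
          # untrusted PR head; this is the line that opens the hole.
          ref: ${{ github.event.pull_request.head.sha }}
      - run: node .github/scripts/assign.js  # now runs attacker-modified code
        env:
          GH_TOKEN: ${{ secrets.BOT_TOKEN }}  # exfiltratable secret
```

Leaving out the `ref:` override (or checking out the base branch explicitly) is what keeps `pull_request_target` within its intended safety envelope.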
Being able to define a workflow per branch, inside the branch, is useful for developing workflows. But it's perilous in other circumstances.
I wish I could, at the repo level, disable the use of actions from ./.github, and instead name another repo as the source of actions.
This could be achieved by defining a pre-merge-commit hook that rejects commits altering protected parts of the tree. It would also require extra checks on the action runner's side.
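Short of a separate actions repo, one partial mitigation available today is gating workflow changes behind a designated team via CODEOWNERS plus branch protection with required code-owner reviews. A sketch (the team name is a placeholder):

```
# .github/CODEOWNERS -- @your-org/security-team is hypothetical.
# With "require review from code owners" enabled on the branch,
# any PR touching these paths needs a security-team approval.
/.github/workflows/  @your-org/security-team
/.github/actions/    @your-org/security-team
```

This doesn't stop a fork from carrying modified workflow files, but it does stop them from landing on protected branches unreviewed.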
Does anyone have experience putting their production branches in a separate repo from their development branches?
GitHub makes it very easy to make a pull request from one repo into another.
This would seem to have a lot of benefits: you can have different branch protection rules in the different repos, different secrets.
Would it be a pain in the ass?
For an open source project you could have an open contribution model, but then only allow core maintainers to have write access in the production repo to trigger a release. Or maybe even make it completely private.
At a previous employer we did this with our docs repo.
The public docs site was managed and deployed via a private GitHub repository, and we had a public GitHub repo that mirrored it.
The link between them was an action on the private repo that pushed each new main commit to the mirror. Customer PRs on the public mirror would be merged into the private repo, auto-synced to the mirror, and GH would mark the public PR as merged when it noticed the PR's commits were all on main.
It was a bit of a headache, but worked well enough once the staff involved in docs built up some workflow conventions. The driver for the setup was that the docs writers wanted the option to develop pre-release docs discreetly, but customer contributions were also valued.
TIL: yarn/pnpm has a minimumReleaseAge setting.
"We also suggest you make use of the minimumReleaseAge setting present both in yarn and pnpm. By setting this to a high enough value (like 3 days), you can make sure you won't be hit by these vulnerabilities before researchers, package managers, and library maintainers have the chance to wipe the malicious packages."
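For pnpm, the quoted setting can be sketched like this (a hedged example; check your package manager's docs for the exact field name and units in your version, as yarn's equivalent may be spelled differently):

```yaml
# pnpm-workspace.yaml -- delay installs of freshly published versions.
# pnpm takes the value in minutes; 4320 minutes = the 3 days suggested above.
minimumReleaseAge: 4320
```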
Long story short: they messed up the assign-reviewers.yml workflow, allowing external contributors to merge PRs without proper reviews. From this point on, you're fully open to all kinds of bad stuff.
That's not what happened at all.
The attacker did not need to merge any PRs to exfiltrate the credentials.
What actually happened:
The workflow was configured in a way that allowed untrusted code from a branch controlled by the attacker to be executed in the context of a GitHub action workflow that had access to secrets.
More so if you actually do keep secrets on GitHub with the rights to do meaningful things.
Yeah that's a pretty deadly combo.
Here's an AI product I would actually use: Write my damn GH actions yml for me.
Oh, and describe for me exactly how it works and why. And be right about it.
Except the model would have been trained on the available corpus of known runners and will achieve the same average level of quality...
Why does it need to be a distinct product and not Cursor/ChatGPT/Claude code/any of the other existing tools?
(If you're so anti-AI that you're still writing boilerplate like that by hand, I mean, not gonna tell you what you do, but the rest of us stopped doing that crap as soon as it was evident we didn't have to any more.)
Opener source software
This is a great writeup; kudos to the PostHog folks.
Curious: would you be able to make your original exploitable workflow available for analysis? You note that a static analysis tool flagged it as potentially exploitable, but that the finding was suppressed under the belief that it was a false positive. I'm curious if there are additional indicators the tool could have detected that would have reduced the likelihood of premature suppression here.
(I tried to search for it, but couldn't immediately find it. I might be looking in the wrong repository, though.)
Here's the PR that introduced the vulnerability: https://github.com/PostHog/posthog/pull/37915
It's a bit funny the vuln was introduced by someone with the username "haacked"
So it wasn't a phishing attack? I wonder how those bot access tokens got stolen.
> The PR was opened, the workflow run, and the PR closed within the space of 1 minute (screenshots include timestamps in UTC+2, the author's timezone):
It's an unfortunately common problem with GitHub Actions: it's easy to set things up so that any PR opened against your repo runs the workflows as defined in the PR's branch. So you fork, make a malicious change to an existing workflow, open a PR, and your code gets executed automatically.
Frankly at this point PRs from non-contributors should never run workflows, but I don't think that's the default yet.
Problem is that you might want to have the tests run before even looking at it.
I think the mistake was to put secrets in there and allow publishing directly from github's CI.
Hilariously the people at pypi advise to use trusted publishers (publishing on pypi from github rather than local upload) as a way to avoid this issue.
https://blog.pypi.org/posts/2025-11-26-pypi-and-shai-hulud/
> Problem is that you might want to have the tests run before even looking at it.
Why is this a problem? The default `pull_request` trigger isn't dangerous in GitHub Actions; the issue here is specifically with `pull_request_target`. If all you want to do is have PRs run tests, you can do that with `pull_request` without any sort of credential or identity risk.
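A minimal sketch of that safe pattern (hypothetical workflow; the npm commands stand in for whatever your test suite is): tests on `pull_request` run the fork's code, but in a context with no repository secrets and, here, an explicitly read-only token.

```yaml
# Safe sketch: pull_request runs the fork's code, but fork PRs get
# no repository secrets, and the token here is explicitly read-only.
name: tests
on: pull_request

permissions:
  contents: read

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4  # defaults to the PR's merge commit
      - run: npm ci && npm test
```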
> Hilariously the people at pypi advise to use trusted publishers (publishing on pypi from github rather than local upload) as a way to avoid this issue.
There are two separate things here:
1. When we designed Trusted Publishing, one of the key observations was that people do use CI to publish, and will continue to do so because it conveys tangible benefits (most notably, it doesn't tie release processes to an opaque phase on a developer's machine). Given that people do use CI to publish, giving them a scheme that provides self-expiring, self-scoping credentials instead of long-lived ones is the sensible thing to do.
2. Separately, publishing from CI is probably a good thing for the median developer: developer machines are significantly more privileged than the average CI runner (in terms of access to secrets/state that a release process simply doesn't need). One of the goals behind Trusted Publishing was to ensure that people could publish from an otherwise minimal CI environment, without even needing to configure a long-lived credential for authentication.
Like with every scheme, Trusted Publishing isn't a magic bullet. But I think the prescription to use it here is essentially correct: Shai-Hulud propagates through stored credentials, and a compromised credential from a TP flow is only useful for a short period of time. In other words, Trusted Publishing would make it harder for the parties behind Shai-Hulud to group and orchestrate the kinds of compromise waves we're seeing.
It does largely avoid the issue if you configure it to allow only specific environments AND require reviews before pushing/merging to branches in that environment.
https://docs.pypi.org/trusted-publishers/adding-a-publisher/
For a malicious version to be published would then require a full merge, which is a fairly high bar.
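A hedged sketch of that combination for PyPI (hypothetical workflow; the environment name `pypi` is an assumption you'd match to your Trusted Publisher config): an OIDC-based publish job bound to a GitHub environment, so required reviewers on that environment gate every release.

```yaml
# Sketch: publish via PyPI Trusted Publishing (OIDC), gated by a
# GitHub environment that can require manual reviewer approval.
name: release
on:
  release:
    types: [published]

jobs:
  publish:
    runs-on: ubuntu-latest
    environment: pypi        # configure required reviewers on this environment
    permissions:
      id-token: write        # mints the OIDC token Trusted Publishing uses
      contents: read
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1  # no stored API token
```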
AWS allows something similar.
As we're seeing, properly configuring github actions is rather hard. By default force pushes are allowed on any branch.
Yes and anyone who knows anything about software dev knows that the first thing you should do with an important repo is set up branch protections to disallow that, and require reviews etc. Basic CI/CD.
This incident reflects extremely poorly on PostHog because it demonstrates a lack of thought about security beyond the surface level. It tells us that any dev at PostHog can publish packages at any time, without review (because we know the secret to do this is accessible as a plain GHA secret, which can be read from any GHA run, which presumably runs on any internal dev's PR). The most charitable interpretation is that they consciously justify this because it reduces friction, in which case I would say that demonstrates poor judgement, a bad balance.
A casual audit would have revealed this and suggested something like restricting the secret to a specific GHA environment and requiring reviews to push to that env. Or something like that.
They do explain all the details of how they got the tokens stolen.
It's explained in the article, under "Why did it happen?".
They explain how.
“At 5:40PM on November 18th, now-deleted user brwjbowkevj opened a pull request against our posthog repository, including this commit. This PR changed the code of a script executed by a workflow we were running against external contributions, modifying it to send the secrets available during that script's execution to a webhook controlled by the attacker. These secrets included the Github Personal Access Token of one of our bots, which had broad repo write permissions across our organization.”
Which shows the danger of keeping build scripts in your repos and letting users update them themselves.
Paired with a long-lived GitHub access token that had more access than needed for this operation. GitHub Actions has features for short-lived tokens that are not stored in static Actions secrets. I'm not quite sure why a bot user was actually needed here. Then there is the simple fact that lots of developers over-provision their environments: every session hosts hundreds of env variables for all kinds of things, from Docker to GitHub tokens. We started to OIDC all the things in Jenkins and GitHub Actions so secrets are accessible only by certain repos and branches within them. But the more you lock that down, the more flexibility you lose. Or you need even more automation to help with access management.
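For many of the jobs a bot PAT gets used for, the per-run `GITHUB_TOKEN` with explicitly narrowed permissions is often enough; it expires when the run ends and is never stored as a static secret. A sketch (hypothetical `label` job; a PAT is still needed for genuinely cross-repo operations):

```yaml
# Sketch: prefer the ephemeral GITHUB_TOKEN, scoped down per job,
# over a long-lived bot PAT stored in Actions secrets.
name: triage
on: pull_request

permissions:
  contents: read             # workflow-wide default: read-only

jobs:
  label:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write   # grant only what this job actually needs
    steps:
      - run: gh pr edit "$PR_NUMBER" --add-label triage
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}  # expires when the run ends
          PR_NUMBER: ${{ github.event.pull_request.number }}
```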
Oh. I must be blind. Well, that's a warning for all.
I was expecting this to be some sort of bizarre Dune satire.
Imagine my surprise that the company that posts "Collaboration sucks" and endorses a YOLO approach to decision making then has a security breach based on misconceptions of a GitHub action that was caught by security tools and could have been proven out via collaboration or a metered approach to decision making.
Posthog's website design feels like a joke that went a bit too far
Other than the silly design, the website's cookie banner is actively malicious. It proclaims to be legally required and directly blames the President of the European Commission. If Posthog is being truthful about its cookie usage, the cookie banner is in fact not legally required. Consent banners are only required if you're trying to do individual user tracking or collecting personally identifying data; technical cookies like session storage do not require a banner. That they then chose to include a cookie banner anyways, with explicit blame, is an act of propaganda clearly intended to cause unnecessary consent banner fatigue and weaken support for the GDPR.
I don't have a cookie banner on _my_ website for exactly this reason, but I have to admit some people have asked me if it isn't suspicious that I don't. Perhaps that's what they're trying to avoid here? (That would be the positive reading.)
Maybe you need a "why I don't have a banner" banner.
I think that's what Posthog might be trying, but as per the above there may be a fine line between funny and annoying, and/or between useful and useless.
or maybe I just missed your sarcasm
I agree it’s stupid but wouldn’t ascribe intent without more information
They made a post how they reinvented ux
Okay, now I think this is really a joke. A website where it's not possible to scroll with the keyboard is telling us something about UX.
It not only feels like one; scrolling with the keyboard genuinely isn't possible. This is a joke.
[flagged]
Surely we can make an exception when it's this egregious? Like all rules, there are exceptions.
Second time you posted this, are you a moderator?
The slight side scrolling on mobile, and overriding the link alt-click behavior… why
I didn't know what Posthog was before this event, but the website is so unusable on Safari on macOS or iOS for me that I'm surprised I stuck through to discover the product.
Curious, I pressed "X" on the blog post. It went away, leaving me with the fake desktop view at "posthog.com". Ok, fine. How do I get back?
I pressed the back button on my browser. The URL updated to be the blog post's URL. A good start. But the UI did not change, leaving me at the desktop view.
Many moments like these if you use Posthog
Without JavaScript, all I get is a background image and a top "navigation bar" where the only thing that's actually operable at all is a signup link. Which then goes to a completely blank page.
I still don't know what Posthog is, but I'm now committed to never using it if I can at all help it.
We are talking about a company's JavaScript libraries (the npm attack). Knowing that, I'm pretty sure that people who browse without JavaScript enabled aren't their target market.
I'm apparently also not in their market, so the best I can say from the website is (hand wavy) "website analytics".
The site caused my browser to freeze and it reminded me of the 56k modem days.
Interesting, I remember people here praising their website redesign a while ago.
[flagged]
In this case, I think GP is suggesting this rises above the level of a tangential annoyance.
Also HN doesn't need 11 month old volunteer mods.
a) ok whippersnapper, b) new community members have the most energy. I’m not actually sure there’s much need for volunteer mods on HN tbh, but the best volunteers are often the newest folks around.
Well wagging the same finger twice in the same comment section on the same point is overenergetic in my book.
At least the second time it should have become obvious that the comments were voicing a common response of visitors to the site, so were constructive rather than nitpicking.
It’s tangential, because it’s not about the information posted.
Wow, I hate this website to be honest. So much of the space is taken up by all these "bars" on my already small screen.
Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
https://news.ycombinator.com/newsguidelines.html
So I saw the headline and for a moment I was very confused: aren’t sand worms fictional?
Pre-coffee, apparently.