Introduction#
Deno is a JavaScript runtime with an interesting security model.
By default, all access to I/O is restricted.
That is, if you run deno run suspicious.ts and that program tries to read or write a file, access the network, or read an environment variable (among other things), you will be interactively prompted to decide whether to allow that access.
For example:
// suspicious.ts
console.log("Hello, world!");
Deno.readFile("file.txt");
Deno.env.get("MY_SECRET_ENVIRONMENT_VARIABLE");
fetch("https://example.com/hello");

The motivation for this is pretty clear.
Modern software development brings in an extremely large amount of dependencies.
As developers we rely on the goodwill of open source maintainers not to publish malicious code, and we trust that those maintainers employ strong security practices so that they are never compromised and used to push malicious code on their behalf.
All it takes is a single dependency to do the equivalent of rm -rf /, and you lose all of your files.
Or much worse, silently steal your API keys from your environment variables or .aws directory, install a keylogger, or much much more.
Being a developer in today’s world means accepting remote code execution on our daily driver machines, and in our code running in production.
Supply Chain Attacks#
This is not an unfounded fear.
While writing this post, another supply chain attack happened on NPM. Nx, a popular monorepo build tool, had its NPM token stolen via a vulnerable GitHub action (it looks like the Nx developers didn’t run zizmor on it). This allowed the attacker to publish malicious code to the Nx package on NPM, so users who updated to the latest version would run it.
The malware was put in a postinstall script.
These are run on npm install, so you don’t even have to start your app to trigger the malware.
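To illustrate the mechanism (this is a made-up package, not the actual Nx payload), a postinstall hook is just a scripts entry in package.json:

{
  "name": "innocent-looking-package",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node setup.js"
  }
}

When the package is installed, including as a transitive dependency, npm runs setup.js with your full user privileges.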
If this concept sounds scary, that’s because it is.
It is the ideal hook point for targeting developers’ machines.
By the way, other languages have this too.
For example, Rust has build scripts.
A lot of people are hung up on the involvement of vibe coding tools like Claude Code.
You can learn more about the Nx hack specifics here and read the payload code here.
But I think it might even be better to go in without that context and do a little exercise here first.
What would happen if Nx ran on Deno instead of requiring Node, so it ran inside of the security sandbox?
Think about it like this: you just ran npm install and you got these prompts which you need to authorize.
You can either approve the step and move on to the next one, or kill the install altogether.
What would you do?
- Run subprocess which claude
- Run subprocess which gemini
- Run subprocess which q
- Run subprocess claude --dangerously-skip-permissions -p "Recursively search local paths..."
- Run subprocess which gh
- Run subprocess gh auth token
- Run subprocess which npm
- Run subprocess npm whoami
- Read environment variable HOME
- Read file $HOME/.npmrc
- Write file $HOME/.bashrc
- Write file $HOME/.zshrc
- Read file /tmp/inventory.txt
- Read file $HOME/projects/webapp/.env
- Read file $HOME/.ssh/id_rsa
- Connect to github.com
In this case, if you killed it before the last step of “Connect to github.com” then congratulations!
Nothing bad would have happened to you.
Well, whenever you start a new shell your computer will shut down, but that is a minor inconvenience and trivially rectifiable.
To me, there are so many red flags here.
A postinstall hook should never run an AI chat, it should never read my GitHub auth token, it should never write to my .bashrc, and it should never try to read secrets from unrelated directories.
Admittedly, this particular malware is extremely amateur.
It could have been much stealthier, tried to establish persistence, and so on.
If I ran into this in the wild and had this interactive security sandbox session, I am pretty confident I could avoid getting compromised.
Docker#
Deployment solutions for production do a pretty good job of isolation. Only the required files and environment variables are built into a Docker image for production, so there are no “additional secrets” to leak.
Of course, a compromised binary in production will have access to customer data, so it is important to follow the principle of least privilege for the database and API access given to the program. An interactive sandbox does not solve this part of supply chain security, but I think it is still valuable to give developers tools to protect their own machines at least. A compromised developer machine is likely worse anyway, as developers typically have more permissive access to databases and have secrets lying around.
Some people use Docker to isolate their development environment too, for example with devcontainers.
This is a potential alternative to an interactive sandbox.
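For the curious, a minimal devcontainer.json looks something like this (the image and tag here are just illustrative):

{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:22",
  "postCreateCommand": "npm install"
}

Your editor builds and attaches to that container, so an npm install gone wrong trashes the container rather than your machine.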
For me, I like to have my own environment set up with tons of little tools.
Am I really going to shuffle my configurations into the container and install helper tools like jq into every development container?
Developing inside a container is just not fun to me.
Standard Libraries#
The reason we embrace adding other people’s code into our programs in the first place is that it is just plain convenient. Personally speaking, I hope to never implement SMTP myself to send emails from my app. After all, if someone has already gone through the headache of writing a spec-compliant library, testing it, and making it secure, why should I? If everyone owned their own bespoke libraries for everything, we’d be wasting human effort and be way less productive.
I think a strong standard library solves many of these problems.
There is no world in which isNaN should be a package.
JS is perhaps the worst offender in this regard, but languages like Rust also have large dependency graph problems.
Language designers are usually better known and more trusted (at least more so than a random GitHub account with no email), making them suitable owners of basic functionality.
In my opinion, the line for what a standard library should implement is something like boring and trivial code. A non-exhaustive list: file I/O, string operations, threading, data marshalling (JSON, CSV, YAML, TOML), network requests (HTTP(S)), logging, command line argument parsing, byte manipulation (SHA-256, base64), compression (zlib, gzip, etc.), and cryptography.
I would note that I don’t think these have to be particularly good; they just need to be good enough for basic tasks.
For example, the flag package in Golang does argument parsing, but it doesn’t do subcommands, and it treats -flag and --flag identically instead of supporting separate short and long options.
But sometimes that’s all you need.
If you need more advanced features like the ones mentioned before, or shell autocompletion, or whatever, it’s time to reach for a third party package.
And that is totally okay!
It doesn’t mean that flag is useless at all.
Of course there are bonus points for a more comprehensive standard library.
For example, Python has sqlite3 and difflib.
And Golang has an HTTP server/router with net/http, and even DWARF parsing with debug/dwarf!
By the way, the Deno team has written a standard library for JS, available on JSR, that has string manipulation, CSV parsing, and more. And hosting a web server is part of the Deno runtime itself.
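As a quick taste (module and option names per @std/csv on JSR; check jsr.io for current versions):

// Parse CSV with the standard library from JSR.
import { parse } from "jsr:@std/csv";

const rows = parse("name,role\nalice,admin", { skipFirstRow: true });

// The web server is built into the runtime itself; running this
// prompts for (or requires) network permission.
Deno.serve(() => new Response(JSON.stringify(rows)));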
Objective#
Asking an entire ecosystem to adopt a more comprehensive standard library is too big an ask to make a practical difference today. And it still doesn’t solve the original problem of third party code being potentially unsafe. Why can’t we have Deno’s permission system for any arbitrary program running on my system?
Deno is of course a JavaScript runtime.
It takes a JavaScript engine (V8), and adds I/O wrappers to it.
This is why they have a Deno.writeFile for example, while Node has fs.writeFile.
Obviously “native” JavaScript has no way to write files; it is designed to be highly sandboxed.
That’s why runtimes add their own API layer to do I/O and other “unsafe” things.
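For instance, writing a file goes through each runtime’s own API layer. These are two separate snippets, one per runtime:

// write.ts, in Deno (prompts for write access, or pass --allow-write):
await Deno.writeTextFile("out.txt", "hello");

// write.mjs, in Node (no prompt; full access by default):
import { writeFile } from "node:fs/promises";
await writeFile("out.txt", "hello");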
To say that Deno is a Node-compatible runtime is to say that Deno also implements all the Node APIs, such that JS code written for Node also happens to work in Deno.
They’ve just implemented their I/O layer to have a permission system where before it executes an action, it prompts the user (or allows it if the user set a flag to allow it on the runtime before starting execution).
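Concretely, returning to suspicious.ts from earlier, you could pre-approve exactly the accesses you expect instead of being prompted for each one:

deno run \
  --allow-read=file.txt \
  --allow-env=MY_SECRET_ENVIRONMENT_VARIABLE \
  --allow-net=example.com \
  suspicious.ts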
The third party code in this case is all JS, which lives inside of this secure interpreter.
Other interpreters like Node or Python could implement the same thing if they wanted to.
But for any arbitrary program it is pretty tricky.
After some research, it looks like no tool that can do this already exists.
Which means it’s time to roll up my sleeves and see how I can do this myself.
I want a tool which protects developers from all the dependencies they constantly download.
I want a tool that lets me run curl https://example.com/install.sh | bash safely, approving the side effects as they happen.
This is the story of how I implemented cordon, an experimental interactive security sandbox for arbitrary programs on Linux.
