If you have ever tried to secure a development environment, you know that it is an incredibly challenging task. Developers often need administrative privileges to install and configure their tools, and granting those privileges opens the door to a number of threats, including running unwanted software with elevated privileges.
We can prevent unwanted software from executing. The most effective control is application whitelisting, and one of the most popular products in this space is Carbon Black Protection. With a whitelist approach, only the binaries that you have pre-approved are allowed to execute. This approach requires a significant investment, but I know of no more effective alternative.
But software developers produce binaries that an administrator has not yet approved. So the question is, can you secure a development environment by utilizing application whitelisting controls? The answer might surprise you. Before I get into a proposed solution, I want to review a few less effective approaches.
Less Effective Approaches
The first option to secure a development environment is to grant the developer the privilege to approve binaries for execution on his/her local machine. Although this option does give the developer a way out, it is less than satisfactory and leads to several consequences.
First, if you grant a developer the privilege to approve a binary, the developer can locally approve any and all binaries that aren’t explicitly blacklisted. This pretty much negates the benefits of whitelisting.
Secondly, the manual approval process prevents automated unit tests from executing during development. A developer needs to be able to execute unit tests quickly in response to code changes, and some IDEs will even automatically run affected unit tests as code changes are made. That simply isn't feasible if the developer must manually approve the binary after every build. Test Driven Development (TDD) becomes essentially impossible under this scheme.
Thirdly, a single change to a source file requires the approval process to run again. This becomes frustrating to the developer and causes them to grow jaded toward the local approval process, making them more likely to approve a binary that shouldn't be approved.
Lastly, locally approving a binary does not allow it to run on another machine. Although this is a strength of application whitelisting in general, it is a drawback here: someone with the privilege of whitelisting the binary at the enterprise level must still get involved.
Local Development Folder
Instead of allowing a developer to approve every single binary, you could allow them to approve only binaries in a certain folder. If your development team has standardized on a local development folder, such as C:\Development\ or /development/, then you could automatically approve binaries in those folders.
Although this would allow a developer to maintain the productivity s/he would enjoy outside of a whitelisted environment, this makeshift sandbox is easy to escape. Developers could install any software to this directory and it would fly under the radar, and targeted malware could easily take advantage of the same loophole.
Code Signing
The next option to secure a development environment is to automatically approve software that has been digitally signed by a developer. You grant each developer a digital certificate that s/he can use to digitally sign code; once signed, the binary can execute on any machine in the enterprise. This is a slightly better approach than the previous one, but it still has dire consequences.
The first major issue with this approach is that any assembly can be signed, as long as the developer has an approved digital certificate. This means that either a rogue process or a rogue developer can effectively bypass application whitelisting controls. Even worse, depending on your configuration, all signed assemblies have the potential to work across the entire enterprise. Code-signed ransomware sounds like a fun attack vector.
The second issue with this approach is the additional management burden of all the code signing certificates and all the signed binaries. I have seen binaries deployed to production that were signed with the certificate of a developer who is no longer employed by the company. Worse, the organization was unable to revoke the certificate, because s/he had signed hundreds of binaries that were deployed to countless places in production. Certificates are going to expire, and they are going to become compromised. You need to be able to revoke them and to know what the consequences will be.
Third, it may or may not be possible to configure the developer's IDE to debug or to execute unit tests against digitally signed assemblies. Most IDEs will give you execution hooks, but sometimes the necessary hooks aren't there.
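For concreteness, the signing step this approach relies on typically looks like the following on Windows, using the signtool utility from the Windows SDK. This is only a sketch: the certificate file, password, and timestamp server URL are placeholders, not values from any real environment.

```shell
REM Sign the freshly built binary with the developer's certificate.
REM dev-cert.pfx, the password, and the timestamp URL are hypothetical.
signtool sign /fd SHA256 /f dev-cert.pfx /p <password> ^
    /t http://timestamp.example.com MyApp.exe

REM Verify the signature against the default authentication policy.
signtool verify /pa MyApp.exe
```

Note that nothing in this workflow inspects what is being signed: any binary the developer (or a process running as the developer) hands to signtool comes out enterprise-approved.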
A Better Approach
Develop Inside a Container
There is a better way to secure a development environment that seems to be gaining ground. A developer can perform all development tasks from inside a container. A developer can pull down a Docker container with all development dependencies and then mount a volume containing all source code. The developer can then send messages to the container to compile, execute unit tests, or run the application.
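As a sketch of that workflow, the commands below pull a team-maintained image, mount the local source tree into it, and run the build and tests entirely inside the container. The registry, image name, and make targets are hypothetical; the point is that nothing is compiled or executed on the host.

```shell
# Pull a team-maintained image with the full toolchain baked in
# (registry and image name are hypothetical).
docker pull registry.example.com/team/dev-env:latest

# Mount the local source tree into the container and run the build
# and test suite inside it; the host never executes the binaries.
docker run --rm \
  -v "$(pwd)":/workspace \
  -w /workspace \
  registry.example.com/team/dev-env:latest \
  sh -c "make build && make test"
```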
I feel that this is the best approach for a number of reasons. First, it prevents a binary from executing outside of the Docker container. Containers are isolated by nature and can be further locked down and restricted to ensure that communication only happens via well-defined mechanisms. A rogue binary could very well execute in the container, but much of the damage it could do is mitigated.
Secondly, developing inside a Docker container allows a development team to standardize its toolset and onboard a new developer with a single command. No more "it compiles on my machine" excuses because someone has the wrong version of SQLite or Node.js installed: if it works in the image, it will work for everyone who pulls that image.
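A standardized image like that is just a Dockerfile kept in version control. The sketch below assumes a Node.js project with a SQLite dependency, matching the examples above; the base image tag and packages are placeholders for whatever your team actually pins.

```dockerfile
# Hypothetical standardized development image: every developer builds
# and tests against the same pinned toolchain versions.
FROM node:18-bullseye

# Pin the native dependencies the project needs.
RUN apt-get update && apt-get install -y --no-install-recommends \
        sqlite3 \
        make \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /workspace

# Source code is mounted in at run time rather than copied in, so the
# image only changes when the toolchain changes.
```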
Finally, and this should go without saying, developing inside a Docker container makes the transition from development to production far easier. This really is what Docker is all about: being able to create an image and drop it anywhere, knowing that it will work.
VS Code Remote Container Development
Developing inside a container is not a new idea; those outside of the Microsoft community have been doing it for quite a while. However, running a GUI inside a Docker container has been impossible in a Windows environment (I haven't seen a working example of anything other than a blatant hack).
Microsoft recognized this inherent weakness and decided to attack the problem from a different angle. Using the Remote Development extension in Visual Studio Code, Microsoft has chosen to give a developer a “local-quality development experience” while pushing the burden of compilation and execution to the container.
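With that extension, the container-based environment is declared in a .devcontainer/devcontainer.json file checked into the repository, so VS Code knows which image to attach to. A minimal sketch follows; the image name, extension ID, and command are placeholders, not a definitive configuration.

```json
{
  // Image with the team's toolchain; the name is hypothetical.
  "image": "registry.example.com/team/dev-env:latest",

  // Extensions installed inside the container, not on the host.
  "extensions": ["ms-dotnettools.csharp"],

  // Hypothetical command to run once the container is created.
  "postCreateCommand": "make test"
}
```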
This feature is currently only available in the Visual Studio Code Insiders edition, but I must say that I was blown away by the experience. I ran into a few issues with the tooling, but I am quite confident that Microsoft and the VS Code community will work out all of the kinks.
I long for this feature to land in full-fledged Visual Studio. Regardless, remote development inside a Docker container from VS Code is a huge step in the right direction. It offers a sandboxed development environment, it eliminates the need for awkward, custom approval processes during development, it gives the developer a seamless experience, it gives a development team a standard environment, and it makes the software product easier to deploy.