Secure by Design: Cybersecurity is Important for Developers

So I was briefly involved in a project that was probably built back in the early days of the web – back when most developers wrote vanilla PHP scripts and thought jQuery was cool, before the enlightened era of batteries-included web frameworks like Rails or Laravel. I was faced with security practices so ancient or nonexistent that hammering down all the loopholes felt like a hopeless game of whack-a-mole.

This convinced me that the best way to secure a system is to start early: right from the system design itself, down to its individual components; the system should be secure by design. This is only possible if architects and developers are cognizant of security vulnerabilities. At the very least, we can steer clear of obvious amateur mistakes. And so begins my journey to dig a bit deeper into the cybersecurity world.

Björn Kimminich’s Juice Shop

This is where I stumbled upon the benevolent OWASP initiative, which is on a noble mission to improve software security through open-source initiatives and community education (yes, I copied that straight from their website). I spent a month or so fiddling with their Juice Shop project, a sophisticated e-commerce app and pedagogical tool that is deliberately loaded with insecure code and bad practices. Most of the information here is distilled from what I learnt trying out the challenges and reading the source code.

Screenshot of the Juice Shop web app

Even if your career goal is simply to build cool stuff, and even if most web frameworks already ship with best practices bootstrapped, I highly recommend the Juice Shop. I have nothing but praise for Björn Kimminich’s work and the community behind it. I love how hands-on it is in showing first-hand how perfectly fine-looking code can hide exploits when you don’t know what to look out for.

A General Guide for Secure Design

There are a lot of general guidelines out there, but if I had to sum things up in one piece of advice, it is this: your system is only as secure as its weakest link. It only takes a silly bug or some innocent negligence to introduce an attack vector.

It is important to note that secure design is very hard. More than just memorising a list of possible exploits, a developer needs solid fundamentals in operating systems, networking, databases, cryptography, etc. to understand how a hacker can creatively exploit that same knowledge to break a system. It is challenging enough for developers to keep track of the plethora of knowledge needed just to build a usable system and get their million-dollar startup off the ground, and oftentimes security becomes an afterthought in the process. As such, these general outlines do not imply that secure design is trivial, but they are important nonetheless.

I can’t say I’m an expert; it’s obvious to me that what I learned is simply the tip of the iceberg. In fact, I take very little interest in actually becoming a security expert or pen tester. But I hope my points here will be of use to the dear reader.

#1: Never trust user-generated input.

Assume that any user input can come from a skilled hacker with an intimate understanding of how your system works. Always question whether the data you are receiving can be tampered with or forged. It does not matter whether it is a date-time value, an HTTP header, or the user agent: if it comes from a client, it can be tampered with. Do not depend on these values in any serious application logic. Assume the client application can be compromised. Assume the user will do absolutely everything except use your service the way you intended. Use strict input validation (e.g. JSON Schema) to limit API requests to only what is necessary, and sanitise anything that can be vulnerable to injection attacks. Pay close attention when any part of the user input is passed to an interpreter (e.g. SQL queries, JavaScript eval, template engines); any time an application uses an interpreter of any type, there is a danger of introducing an injection vulnerability. (Case study: Node.js VM context is a weak sandbox.)
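
To make this concrete, here is a minimal sketch of both ideas for a Node.js service, using Ajv for JSON Schema validation and node-postgres for parameterised queries. The users table and the request shape are assumptions for illustration:

```typescript
// A minimal sketch: strict JSON Schema validation with Ajv, then a
// parameterised query with node-postgres. Schema and table are hypothetical.
import Ajv, { JSONSchemaType } from "ajv";
import addFormats from "ajv-formats";
import { Pool } from "pg";

interface UserLookup {
  email: string;
}

const ajv = new Ajv();
addFormats(ajv); // enables "format": "email"

const schema: JSONSchemaType<UserLookup> = {
  type: "object",
  properties: {
    email: { type: "string", format: "email", maxLength: 254 },
  },
  required: ["email"],
  additionalProperties: false, // reject unexpected fields outright
};
const validate = ajv.compile(schema);

const pool = new Pool(); // connection settings come from the environment

async function findUser(body: unknown) {
  // 1. Strict validation: anything not matching the schema is rejected.
  if (!validate(body)) {
    throw new Error("invalid request");
  }
  // 2. Parameterised query: the value is sent separately from the SQL
  //    text, so user input never reaches the interpreter as code.
  const { rows } = await pool.query(
    "SELECT id, email FROM users WHERE email = $1",
    [body.email],
  );
  return rows;
}
```

The two measures compose: validation rejects anything outside the expected shape, and parameterisation ensures whatever does get through is treated strictly as data, never as SQL.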

#2: Anything exposed to the public will be exploited.

If an endpoint is meant for development purposes, don’t assume nobody will find it just because it is not linked from the main site or documented anywhere. Don’t assume that a secret public endpoint is safe just because it is only known to your engineering team. On a related note, make sure you do not display debugging information like stack traces or database error messages in your production environment. Such output is helpful for debugging, but it also hands a hacker plenty of information to figure out how to exploit your system. If an endpoint is meant for internal use (like Prometheus’s /metrics), make sure it cannot be accessed from outside the internal network. Assume hackers are clairvoyant: if it is out there, they will find it!
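
To illustrate the stack-trace point, here is a minimal sketch of an Express error handler that keeps the details in server-side logs and returns only a generic message in production; the NODE_ENV check is an assumption about how your environment is configured:

```typescript
// A minimal sketch using Express's standard 4-argument error middleware.
import express, { Request, Response, NextFunction } from "express";

const app = express();
const isProd = process.env.NODE_ENV === "production";

app.use((err: Error, req: Request, res: Response, next: NextFunction) => {
  console.error(err); // keep the detail where only operators can see it
  res.status(500).json(
    isProd
      ? { error: "Internal server error" } // nothing for an attacker to mine
      : { error: err.message, stack: err.stack }, // dev-only detail
  );
});
```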

#3: Scrutinize your tech stack and application dependencies.

What complicates security is that vulnerabilities can be very specific to the OS, platform, library, tool, software package, and programming language you use. It is practically impossible to memorise all the attack vectors specific to every stack, but at the very least, do a quick Google search for the known vulnerabilities of every component of your tech stack. For example, if you are going to use XML to store data, be mindful of XXE exploits; if you are going to process zip files from users, take note of zip slip. Known vulnerabilities are typically documented publicly in the main documentation itself, marked visibly for all to see – the PHP manual for the mysqli_real_query function, for instance, carries exactly such a warning.
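
To pick on the zip slip example: the standard mitigation is to resolve each archive entry’s destination path and reject anything that escapes the extraction directory. A minimal sketch in Node.js (safeDestination is a hypothetical helper, not from any particular zip library):

```typescript
// A minimal sketch of a zip-slip guard. entryName comes from the
// (untrusted) archive; targetDir is where we intend to extract.
import * as path from "node:path";

function safeDestination(targetDir: string, entryName: string): string {
  const destination = path.resolve(targetDir, entryName);
  // A malicious entry like "../../etc/cron.d/evil" resolves outside
  // targetDir, so reject anything that escapes it.
  if (!destination.startsWith(path.resolve(targetDir) + path.sep)) {
    throw new Error(`blocked zip slip entry: ${entryName}`);
  }
  return destination;
}
```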

In any case, as a developer, you should read the official documentation (instead of just copy-pasting from someone’s blog), because hackers read it too!

In addition, just as important as the code you write is the code you didn’t write but depend on. You want to pay especially close attention to sensitive code that handles authentication (e.g. your JWT library), but really, any one of your dependencies is a potential attack vector. If any one of your application dependencies is compromised, an attacker can gain access to your system. A good example of this is the eslint-scope incident, where an attacker gained access to a maintainer’s account and published a modified version of the eslint-scope NPM library that attempted to exfiltrate victims’ npm account credentials. Check your dependencies for typosquatting, and ensure the library authors are trustworthy (you can use tools like Snyk to automate this process).

#4: Limit service access as much as possible.

In the industry, this practice is often referred to as the principle of least privilege. If your service does not need to access a database, do not give it access to that database. Consider the event in which an attacker can execute arbitrary commands on your compromised service (via remote code execution, or RCE), and think about what files or endpoints would be exposed to the attacker. Be particularly careful when you use the same server to run multiple services or store data. In general, for production environments, it is a good idea to use a separate server or VM for the database.

Keep your network as heavily firewalled as possible and monitor any unusual ingress or egress traffic (you could use a service like CrowdSec). Traditionally, you would compartmentalise services by segregating your internal network into subnets; in containerised environments like Kubernetes, internal networks can be created declaratively by the container orchestrator. Never run your service as the root user. Consider creating a unique user for that service only, so that in the event an attacker pulls off RCE, their access is limited. Containerised environments like Docker also provide a good sandbox to prevent an attacker from accessing sensitive files (/etc/passwd, private keys, etc.) on the host server.
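
On the never-run-as-root point, here is a minimal last-resort guard for a Node.js service (POSIX only, since process.getuid is undefined on Windows); the real fix is an unprivileged user in your Dockerfile or systemd unit:

```typescript
// A minimal last-resort guard: refuse to start as root. The service
// user should already be unprivileged (e.g. via a USER directive in
// your Dockerfile); this only catches misconfiguration.
if (typeof process.getuid === "function" && process.getuid() === 0) {
  console.error("Refusing to run as root: create a dedicated service user.");
  process.exit(1);
}
```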

#5: Read up on development best practices related to security.

It is a good idea to be mindful of well-known attack vectors and the existing solutions to mitigate them. Be mindful of where you get your sources; don’t just take any blogger’s word (like mine) as gospel. A good place to start is the OWASP Cheat Sheet Series. If you are active on YouTube, I suggest checking out PwnFunction and LiveOverflow.

Epilogue

Now, though this post talks about software development, cybersecurity is more than that. My point is to emphasise the importance of cybersecurity in software development, where it is often overlooked, especially in the early stages of development. In truth, cybersecurity extends beyond the systems you design: oftentimes people become the weakest link, but that is not the focus of this post.

Cybersecurity is an ongoing process. It is never a one-off task after which your system is henceforth secure. Yes, a solid foundation and clean architecture go a long way, but vulnerabilities can easily be introduced by an innocuous change at a later date. Similarly, your education in cybersecurity (and really, computer science in general) is never finished. New vulnerabilities rear their ugly heads from time to time, and hackers keep finding ingenious exploits in ostensibly secure systems; it will truly surprise you how smart hackers are.

Naturally, it is difficult to keep up with all the news of current vulnerabilities, and even stringent code reviews may not suffice. In addition, there is an inherent tension between pushing out new features on time and considering all the ways an attacker can mess with them. Thus, developing with good security practices in mind is not a substitute for periodic security audits, a security team, or bounty programmes.