I’m bootstrapping a small product. My team likes to believe that this is a high-risk / high-reward situation. One of our priorities is to be flexible and quickly deliver value to our clients. This isn’t unusual for startups* at this stage. To achieve this, we sacrifice some of the qualities a more mature organization might pay more attention to.
* Yeah, Docplanner is hardly a startup anymore, but either way, we’d like to operate as one.
One of them is — unfortunately — security. We can’t afford a complicated process of security audits, pentests, manual tests, etc. But this doesn’t mean we ignore this field completely. To get maximum value for minimal effort, I needed to create an environment that encourages writing secure code.
Secure by default
I think two main factors come into play if you’d like to create secure applications (that is, if automated tooling and a dedicated security team are out of the question):
- The development team needs to be security-aware. This doesn’t mean that everyone has to be an expert, but being aware of the common attack vectors and vulnerabilities allows them to better recognize risks and escalate problems — even if they don’t have the know-how to implement safeguards themselves.
- Secure defaults. Create solutions, architecture, and tooling that make it hard to make a mistake. A great example of this is auto-escaping in the view layer (be it Twig or Vue.js). You have to put in some effort to output without escaping: by using `v-html` or `|raw` (btw, `dangerouslySetInnerHTML` feels even better). In general, it’s advised to make use of the security features of your framework.
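For example, in Twig the safe path is the default and the unsafe one announces itself (a minimal sketch; the variable is made up):

```twig
{# Auto-escaping is on by default: any markup in user.bio renders inert #}
{{ user.bio }}

{# Outputting raw HTML requires an explicit, greppable opt-out #}
{{ user.bio|raw }}
```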
Improving your security practices over time
At any given point in time, you have a certain level of security practices in your team. I’d guess it’s probably on the low end unless you have some security enthusiasts on board. The natural motivation is to raise it over time. How to do it? Add it to your routines and make a habit out of it:
- Do you run retrospectives? Make sure to mention some security issues.
- You probably review your code, but do you have a security checklist? Create one and add it to the PR template, so your team can pick it up (and actually has to put in effort to ignore security-related topics).
- Have you spotted an issue? Make sure it never happens again by introducing some secure defaults. Prefixing a class, as in `InsecureUseWithCaution`, is a decent start (see the sketch after this list).
- Creating a wiki of your security findings and best practices will help developers joining your team level up their security know-how to your level and avoid mistakes that have already been solved once.
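As a sketch of that naming convention (the class here is hypothetical), the warning lives in the name itself, so it surfaces at every call site and in every code review:

```php
<?php

/**
 * Hypothetical example of the convention: anyone writing
 * new InsecureRawHtml($input) has to consciously type "Insecure",
 * which is hard to do by accident and easy to spot in review.
 */
final class InsecureRawHtml
{
    public function __construct(public readonly string $html)
    {
    }
}
```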
Some examples from my recent history
Yeah, I follow my own advice. To some extent at least. Here are some issues that were raised during the development of my product lately.
Avatar uploading allowed sensitive data disclosure
The feature allowing users to upload avatars was already using a bunch of secure-by-default tooling: Symfony forms took a whitelist approach, validators were in place to enforce business rules, and API response serialization was handled by a library instead of manual labor (so we wouldn’t run into escaping issues), with an exclude-by-default policy on top.
But due to the system’s architecture, the API expected a `file_id` field (instead of a URL or binary file contents). Unfortunately, the application failed to verify that the user was authorized to access the given record. This potentially allowed any customer to enumerate the whole database (identifiers were numeric and sequential) and gain access to files they weren’t authorized to see (by inspecting the `avatar_url` in the response).
This was caught during code review, and to avoid similar mistakes in the future, we created a dedicated Symfony `FormType` for file input, by default restricting the allowed values to files owned by the authenticated user (a sketch follows the list below). Some other options were considered, but not implemented due to time/effort constraints:
- Replacing numeric ids with UUIDs, making it borderline impossible to enumerate the database this way
- No longer relying on file URLs being secret, and serving sensitive uploads using signed URLs instead
- Separating sensitive content from public content (like user avatars) and storing them in different database tables
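As for the form type itself, here is a minimal sketch of the idea (the entity, repository, and field names are assumptions, not our actual code): scoping the underlying query to the authenticated user means a foreign `file_id` simply fails validation, as if the record didn’t exist.

```php
<?php

use App\Entity\File; // hypothetical entity holding the uploaded file record
use Doctrine\ORM\EntityRepository;
use Symfony\Bridge\Doctrine\Form\Type\EntityType;
use Symfony\Component\Form\AbstractType;
use Symfony\Component\OptionsResolver\OptionsResolver;
use Symfony\Component\Security\Core\Security;

final class OwnedFileType extends AbstractType
{
    public function __construct(private Security $security)
    {
    }

    public function configureOptions(OptionsResolver $resolver): void
    {
        $resolver->setDefaults([
            'class' => File::class,
            // Scope every lookup to the current user: a file_id pointing
            // at someone else's record behaves exactly like "not found".
            'query_builder' => fn (EntityRepository $repo) => $repo
                ->createQueryBuilder('f')
                ->where('f.owner = :user')
                ->setParameter('user', $this->security->getUser()),
        ]);
    }

    public function getParent(): string
    {
        return EntityType::class;
    }
}
```

From then on, a safe file reference is just another form field, e.g. `$builder->add('avatar', OwnedFileType::class)` (the field name is made up), and the insecure shortcut is gone.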
Trusting API responses to be safe for any context
The search widget highlighted the search term within the results by wrapping it in `<em>` tags. To do this, a simple replace was used, and the result was passed to Vue’s `v-html` directive, which obviously skips escaping. The developer assumed this was safe because the data they were operating on came from a trusted backend API.
Without going too deep into the topic of trust: it’s a spectrum, not a binary option. So even if you trust the API in general (it doesn’t intend to harm you), this doesn’t mean you can use all of its data safely in any context. Another rule — filter input, escape output — applies here: the best place for escaping is as near the output as possible, since only there can you decide the correct escaping method for a given context.
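To make the context point concrete, here is a quick PHP sketch: the same value needs a different escaping routine depending on the sink, and only the output site knows which one applies.

```php
<?php

$term = '<script>alert(1)</script>'; // pretend this came from the API

echo htmlspecialchars($term, ENT_QUOTES); // HTML body/attribute context
echo rawurlencode($term);                 // URL component context
echo json_encode($term, JSON_HEX_TAG);    // inline <script> context
```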
In this particular case:
- The data was provided by other users from within the same organization; despite that, we shouldn’t base our security on the assumption that users within the same company aren’t hostile towards each other
- Some of this data was synchronized with 3rd-party providers, so there were no guarantees about what it contained
- Even if the data could be trusted today, once developed, the feature could easily be extended to other use cases and adopted without a second thought about user-provided data; we wanted to avoid adding an insecure-by-default feature
The issue was an opportunity to talk more about the above-mentioned topics during the code review (the code never went to production).
Server-side request forgery
One of the application’s API endpoints fetched data from a user-provided URL, stored its contents in an S3 bucket, and returned its address. Agnieszka, a member of Docplanner’s security team, pointed out a potential SSRF vulnerability there: the user could provide a local reference instead of a URL (e.g. `../../app/config.php`).
In practice, the endpoint was secured and only used by a trusted 3rd-party vendor, so there was no actual risk right there, and the feedback could have been easily dismissed if it weren’t for the secure-defaults philosophy. Instead, the service responsible for this business logic now requires an instance of `UnsafeUrl` (instead of a simple scalar), and the developer needs to jump through some hoops to instantiate it, for example by using this factory method:
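A minimal sketch of what that could look like (the method name and internals are assumptions; the real code may differ): the constructor is private, so the only way to obtain an `UnsafeUrl` is to state explicitly, at the call site, where the value came from.

```php
<?php

final class UnsafeUrl
{
    private function __construct(private string $url)
    {
    }

    public static function fromUserInput(string $url): self
    {
        // Deliberately scary to type: UnsafeUrl::fromUserInput($url)
        // stands out both in the IDE and in code review.
        return new self($url);
    }

    public function toString(): string
    {
        return $this->url;
    }
}
```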
This way the code was fixed before it ever became an issue. A couple of weeks later, the same code was reused for (untrusted) user uploads, and since the implementation required a whitelist approach, the introduction of an SSRF vulnerability was prevented without any manual security review.
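For the untrusted-upload case, the whitelist can be as simple as the sketch below (the host list and function name are made up for illustration): only absolute http(s) URLs pointing at known hosts get through, so local references never reach the fetching code.

```php
<?php

function assertFetchableUrl(string $url): void
{
    $allowedHosts = ['uploads.example.com']; // assumption for illustration
    $parts = parse_url($url);

    if (!is_array($parts)
        || !in_array($parts['scheme'] ?? '', ['http', 'https'], true)
        || !in_array($parts['host'] ?? '', $allowedHosts, true)
    ) {
        throw new InvalidArgumentException(sprintf('Refusing to fetch "%s"', $url));
    }
}
```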