DevOps and SecOps are relatively new concepts. In the beginning, there were just programmers writing punch cards or assembly code, and later C or C++. The amount of data was tiny by today’s standards and easy to manage. That lasted until about… let’s say 1990.
Then, with the spread of Ethernet and the adoption of the web, computers quickly became more and more globally connected. Due mostly to email, the amount of raw data being produced, sent, and received skyrocketed. Managing that growth called for dedicated IT staff.
At first, IT handled security. But as more and more people and organizations stored critically important data in this global system, hackers found increasingly sophisticated means of attack and exploitation. This “arms race” created the need for dedicated security professionals.
Now, we are rapidly moving into an era of The Cloud and the Internet of Things. This configuration raises data transfer and storage needs by an order of magnitude, and that new intensity puts tremendous pressure on any singularly skilled professional: security and IT need to understand development concerns, security and development need to understand IT concerns, and development and IT must understand security concerns.
This expansion of responsibilities is already well underway for developers—mostly when developers absorb some IT responsibility around deployment and use things like “infrastructure-as-code.” This practice is commonly called DevOps. When IT and security mesh, we call it SecOps.
These designations are helpful, but it is time not only for development to start assuming security concerns, but for every individual on the engineering team to become security-minded.
Default Permit is the idea that, by default, any user can perform any action on a system. No part of a secure system should ever be configured this way, yet this is one policy that development ignores on a shockingly regular basis.
We see database configurations where the username/password is root/root all the time, and development machines where the primary user account has full administrative privileges. Often, people don’t even realize they are running under Default Permit. We also see this in third-party development tools and platforms like GitHub: one user with a “developer” role very often has full access to the entire GitHub organization, allowing a hacker who compromises that account to wreak havoc on an unsuspecting organization.
Changing to a Default Deny policy will be difficult at first because it greatly impacts usability, but it pays off once adopted. It protects against many things, with unwanted privilege escalation chief among them.
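At the filesystem level, Default Deny can be as simple as making “no access” the starting point and granting permissions back explicitly. A minimal sketch, with hypothetical paths:

```shell
#!/bin/sh
# Minimal sketch of a default-deny posture for a secrets directory.
# The ./secrets path and file name are illustrative.
set -eu

umask 077                      # new files get no group/other access by default
mkdir -p ./secrets
echo "example-api-key" > ./secrets/api.key

chmod 700 ./secrets            # owner only; everyone else is denied
chmod 600 ./secrets/api.key    # explicitly grant read/write to the owner alone

# Inspect the result: no group/other permission bits should appear
ls -ld ./secrets ./secrets/api.key
```

The same posture applies elsewhere: a database user, for instance, would start with no grants and receive only the specific privileges its application needs.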
Encryption at Rest is the idea that data is stored on disk in encrypted form and decrypted only when accessed. Most operating systems offer this as a built-in option for laptops and desktops, and it should be enabled there, but also in your containers, your VMs… anywhere you can use it.
This isn’t trivial to do with tools like Docker, but it is definitely possible and encouraged. It protects against both social engineering and data theft by making stolen data useless to attackers.
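The core mechanic can be sketched with the `openssl` CLI: only the ciphertext ever lives on disk, and plaintext exists just long enough to be used. The passphrase here is a stand-in; in practice it would come from a secrets manager, never a literal string:

```shell
#!/bin/sh
# Sketch: keep a data file encrypted on disk, decrypt only when reading.
# Assumes the openssl CLI. PASS is a hypothetical stand-in for a key
# fetched from a secrets manager -- never hardcode real keys like this.
set -eu

PASS="correct-horse-battery-staple"

echo "customer records" > data.txt

# Encrypt at rest: after this, only data.txt.enc remains on disk
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -pass "pass:$PASS" -in data.txt -out data.txt.enc
rm data.txt

# Decrypt on access
openssl enc -d -aes-256-cbc -pbkdf2 \
  -pass "pass:$PASS" -in data.txt.enc -out data.txt
cat data.txt
```

Full-disk tools like LUKS or a cloud provider’s encrypted volumes do the same thing transparently at the block-device layer.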
It’s not easy to secure a production environment. It’s even less easy to do it as an afterthought or as the last step in a production deployment. An alternative approach is to start building security into your deployment pipeline very early. If you take the time to make your development environments secure, that at least gives you a foundation to make staging secure, followed by production.
Practically speaking, there are a number of factors that go into this, but the practice should be: anything you do in production for security you should do at all deployment stages, including development. This includes any vulnerability assessments, security audits, penetration testing, and more.
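One concrete way to keep security checks identical across stages is to script them once and run that script in every pipeline. A naive sketch, using a grep for hardcoded credentials (the pattern and the `src/` path are illustrative; real projects would layer on dedicated scanners):

```shell
#!/bin/sh
# Sketch: a check run identically in development, staging, and production
# pipelines -- here, a naive grep for hardcoded credentials in src/.
# The pattern is deliberately simple and illustrative.

if grep -rnE '(password|api_key|secret)[[:space:]]*=' src/ 2>/dev/null; then
  echo "Possible hardcoded credentials found -- failing the build." >&2
  exit 1
fi
echo "No obvious hardcoded credentials."
```

Because the same script gates every stage, a problem caught in development never has a chance to surprise you in production.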
The first and most important thing to do with credentials like passwords and API keys is to keep track of them. Find a secure place to store the API keys that you give out to your developers on an as-needed basis. Then rotate these keys on a regular basis and re-disseminate them to, and only to, the developers who need them.
Another best practice is to require all user passwords to change every 30, 60, or 90 days. You should be doing this with as many passwords and keys as you can. This can be cumbersome, but it will greatly reduce the risk of a lost or stolen device compromising your data security model.
There’s a great section of the OWASP wiki that explains this. The main takeaway is that besides the obvious security benefit of being able to revoke keys and assign new ones, rotation also provides a layer of accountability and auditing to your security process: you always know who is using which keys and when.
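A rotation step can be sketched in a few lines of shell: mint a fresh random key and archive the old one with a timestamp so revocation and auditing stay possible. The `./keys` layout is hypothetical; a real setup would use a secrets manager rather than flat files:

```shell
#!/bin/sh
# Sketch of a key-rotation step. File layout is hypothetical; a real
# setup would store keys in a secrets manager, not on local disk.
set -eu

KEY_DIR=./keys
umask 077
mkdir -p "$KEY_DIR"

# Archive the current key (if any) so there is an audit trail
if [ -f "$KEY_DIR/api.key" ]; then
  mv "$KEY_DIR/api.key" "$KEY_DIR/api.key.retired.$(date +%Y%m%d%H%M%S)"
fi

# Generate a new 256-bit key, hex encoded
openssl rand -hex 32 > "$KEY_DIR/api.key"
echo "New key written to $KEY_DIR/api.key"
```

Run on a schedule (cron, a CI job), this makes “rotate every N days” a default rather than a chore someone has to remember.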
Continuous security is the practice of building security measures and tests into your continuous deployment chain, and applying security practice and methodology beyond it. Put another way: are you including security auditing and monitoring in your build step and your deployment pipeline the same way you would a linter or a test suite?
Continuum Security and OWASP released a slide deck on this called Continuous Security Testing, which describes using Behavior Driven Development-style testing in a security context, allowing you to write and automate your security tests the same way you would any functional test, and then run them with every build.
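The flavor of this approach can be shown in plain shell: a Given/When/Then scenario expressed as an ordinary test that runs on every build. The scenario below (a minimum password policy) is illustrative only, not taken from the slide deck:

```shell
#!/bin/sh
# Sketch of a BDD-flavored security test written as plain shell so it
# can run on every build like any functional test. The password-policy
# scenario is illustrative, not from the Continuous Security Testing deck.

check_password() {
  # Given a candidate password
  pw="$1"
  # When we evaluate it against policy
  # Then it must be at least 12 characters and contain a digit
  [ "${#pw}" -ge 12 ] && printf '%s' "$pw" | grep -q '[0-9]'
}

check_password 'correct horse battery 1' && echo "PASS: strong password accepted"
check_password 'short1' || echo "PASS: weak password rejected"
```

Dedicated BDD tooling adds readable feature files and reporting on top, but the principle is the same: security expectations become executable, repeatable tests.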
Finally, if you are using a build server, containers, or anything that might be running on some other server elsewhere, make sure your source code doesn’t leak.
This can happen, for example, if a build server doesn’t properly clean up its build artifacts and then leaves copies of the source code accessible via a web browser. This is a vulnerability both inside of your local network and outside, as it might allow attackers to get your source without any escalated privileges.
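A simple guard against this is a post-build sweep that deletes anything source-like from the publicly served directory and fails the pipeline if source is still reachable. The `./public` path and file patterns are illustrative:

```shell
#!/bin/sh
# Sketch: after a build, ensure no source files linger in the publicly
# served directory. WEB_ROOT and the file patterns are illustrative.
set -eu

WEB_ROOT=./public

# Remove source files and VCS metadata the build may have left behind
find "$WEB_ROOT" \( -name '*.py' -o -name '*.go' -o -name '.git' \) \
  -exec rm -rf {} + 2>/dev/null || true

# Fail loudly if anything source-like is still reachable from the web root
leftovers=$(find "$WEB_ROOT" \( -name '*.py' -o -name '*.go' \) | wc -l)
if [ "$leftovers" -ne 0 ]; then
  echo "Source leaked into $WEB_ROOT" >&2
  exit 1
fi
echo "Web root clean."
```

The same idea applies to container images: copy only built artifacts into the final image, never the source tree.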
Don’t get us wrong, developer-driven security is very, very important. But it’s really an artifact of another extremely important principle in the age of The Cloud and the Internet of Things: everyone should be thinking about security.
Regardless of whether you have a formal SecOps practice or your DevOps team simply implements ad hoc security policies, you should meet a baseline security requirement. These practices should help start you in the right direction, but ultimately your engineering teams need to define and enforce proper security protocols in DevOps.