During a meeting this week, we discussed with the other middleware product providers how to improve the continuous delivery of products. Much of my concern lies with the lack of proper infrastructure tests of these products as they are deployed. Although there is testing that checks for internal consistency, there is no functional testing of the products and, more importantly, no independent specification of how they should behave in certain scenarios.
Luckily, compliance as code has come a long way, and there are good frameworks for writing infrastructure tests, just as one would write for applications, or middleware products in this case.
There used to be almost unanimous agreement on the means to configure services, right up until the end of the European Middleware Initiative: YAIM. Now that EMI is over, sites have a bit more freedom to choose which configuration method to use. As the old saying goes though,
> Freedom isn’t Free
The price we have to pay for this freedom is not just learning a new tool like Puppet, but also a new set of norms for sharing and re-using components, as we did with YAIM. Since there is no longer a central authority developing the configuration tool, this is going to be hard…
But does it really matter how we configure services? Isn’t all that really matters, in a functional sense, that these services are working properly? If so, we can adopt Test-Driven Development for our infrastructure services and share the tests!
So, I wanted to follow the Test-Driven Development style to ascertain whether this would be feasible for the simpler services in the middleware stack - the ones which are pretty much standalone and don’t depend on complex interconnections between services, yet expose a nontrivial endpoint. With this in mind, I chose the BDII instead of, say, the WN or UI, and started work on an InSpec profile for the various kinds of BDII.
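To give a flavour of what such a profile looks like, here is a minimal sketch of an InSpec control. The service name `bdii` and port 2170 are assumptions about a typical deployment; adjust them to match your site.

```ruby
# Hypothetical InSpec control for a BDII node.
# Assumptions: the service is registered as 'bdii' and the LDAP
# endpoint listens on port 2170 - adapt to your deployment.
control 'bdii-service' do
  impact 0.7
  title 'BDII service should be enabled, running, and listening'

  describe service('bdii') do
    it { should be_installed }
    it { should be_enabled }
    it { should be_running }
  end

  describe port(2170) do
    it { should be_listening }
  end
end
```

Controls like this live in a profile’s `controls/` directory and run against a target with `inspec exec`, so the same spec can be shared and re-used across sites regardless of how the service was configured.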
Taking a step back, I assume we want the BDII to be secure and robust, beyond also doing what it’s supposed to do. We also don’t want to re-invent the wheel, so we don’t do OS-hardening tests here (we assume they are done in a different profile).
I’ve thus separated the controls into three different files:
- The BDII spec doesn’t (yet) take the BDII level (site/top) into account - there should be specific tests depending on the scenario.
- I naively assumed that the CIS benchmark for OpenLDAP would help - but according to the OpenLDAP maintainers it doesn’t. In fact, CIS no longer publishes the benchmark. So, we’d best go with what they describe in that thread…
- There are many CVEs for OpenLDAP! My first thought was to take only the advisories that the EGI CSIRT Software Vulnerability Group publishes, but there’s no way to filter or search those. So, I ended up deciding to filter the OpenLDAP CVEs by CVSS3 >= 5 (only 25 of them) and implement controls for those. You gotta start somewhere!
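The filtering step itself is trivial to script. The sketch below shows the idea with hand-written example records - the CVE IDs and scores are placeholders, not real advisory data, and the field name `cvss3` is my own choice:

```python
# Sketch: select CVE records whose CVSS v3 base score meets a threshold.
# The records below are made-up examples, not real OpenLDAP advisories.

def filter_by_cvss3(cves, threshold=5.0):
    """Keep only CVEs whose CVSS v3 base score is >= threshold."""
    return [c for c in cves if c.get("cvss3", 0.0) >= threshold]

sample = [
    {"id": "CVE-EXAMPLE-0001", "cvss3": 7.5},
    {"id": "CVE-EXAMPLE-0002", "cvss3": 3.1},
    {"id": "CVE-EXAMPLE-0003", "cvss3": 5.0},
]

selected = filter_by_cvss3(sample)
print([c["id"] for c in selected])
# → ['CVE-EXAMPLE-0001', 'CVE-EXAMPLE-0003']
```

In practice one would feed this from an NVD data dump or API query for the `openldap` product, then write one InSpec control per selected CVE checking that the installed package version includes the fix.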