The SNIA Cloud Object Storage Test Tools Technical Work Group (TWG) is dedicated to researching, developing, and publishing interoperability test software and methods to expand and enhance the technical compatibility of the S3 ecosystem, including basic cross-vendor interoperability for cloud, on-prem, hybrid, and multi-cloud deployments. The SNIA team driving this initiative recently hosted a live webinar, “SNIA Cloud Object Storage Test Tools (Open-Source Industry Interoperability Project),” where they detailed the efforts of this multi-vendor community. If you missed the presentation, it’s available on-demand along with the slides at the SNIA Educational Library.

The audience asked several questions, and as promised, we’ve answered them here. 

Q: Is Amazon participating in this effort, and how would they benefit?

A: Adam: We believe that SNIA’s efforts share the same customer excellence goals as AWS. We've invited them to participate in the SNIA Cloud Object Storage Plugfest and the SNIA Cloud Object Storage Test Tools TWG. We hope to see them potentially attend SDC this year and establish a line of communication throughout the industry. We recognize that we all have to partner together. 

Customers expect excellence across the ecosystem. We need support from providers, vendors, and those developing SDKs or other open-source tools. Amazon obviously does a great job. I've been working on S3 for almost 15 years, and we want to contribute back, especially in areas where we think we can fill gaps and make AWS customers happier. Interoperability makes the ecosystem that Amazon founded with S3 even richer and easier for customers to grow with. We would be excited for them to join us.

Q: Are there any aspects of the tests that examine the security of implementations? 

A: Adam: We did state that the focus is interoperability, but I think both Matt and Ric can address this question in terms of how their teams deal with security in their products.

Matt: Well, I can speak to the S3 test. That is really about fidelity to the AWS request signing and other security infrastructure and rules, as well as at-rest and on-the-wire encryption. So, it's a matter of API conformance. We don't provide tests that introspect beyond that. 

Ric: To build on what Matt says, behind the wall of the API there's how you implemented your server. You're still responsible as an engineering team for going through all the normal security validation and testing, which is very implementation specific. So, as Matt says, the first job is to validate that you implemented the API and the right security protocols, and support everything required. And the second job, which isn't part of this, is to validate that you don't have any major faux pas in your implementation security-wise. 

Matt: That's right. That includes the API's use of TLS and a request signing regime, SigV4 signatures, that has known security properties and uses known message-authentication digests. There's also a set of encryption models supported within S3's SSE system, and our test suites validate those as well. 
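For readers unfamiliar with the SigV4 regime Matt mentions, the key-derivation chain is publicly documented by AWS and can be sketched in a few lines. This is an illustrative sketch only, showing the HMAC-SHA256 chaining over the credential scope; the credential values are placeholders, and the full protocol also requires building a canonical request and string-to-sign, which is omitted here.

```python
import hashlib
import hmac

def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date: str, region: str,
                      service: str = "s3") -> bytes:
    # SigV4 derives a per-day, per-region, per-service key by chaining
    # HMAC-SHA256 over the components of the credential scope.
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")

def sigv4_signature(signing_key: bytes, string_to_sign: str) -> str:
    # The final signature is the hex-encoded HMAC-SHA256 of the
    # string-to-sign under the derived key.
    return hmac.new(signing_key, string_to_sign.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Placeholder credentials, for illustration only.
key = sigv4_signing_key("EXAMPLE-SECRET", "20250101", "us-east-1")
sig = sigv4_signature(key, "AWS4-HMAC-SHA256\n...")
```

Because the derivation is deterministic, two implementations that agree on the inputs must agree on the signature, which is exactly the kind of cross-vendor property a conformance suite can check.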

Adam: I want to add on to that. I think the AWS team has done an excellent job on security from an API perspective. Obviously, there are some use cases that might require more secure encryption methods that may not be available as a default, and I'm sure that other vendors have additional ones. There's also the difficult question, especially for a cloud provider, of where storage ends and other things begin. If you're using something like server-side encryption, where the keys are provided by some other security service, that is obviously a point with massive security implications, even though from the API's perspective it's fine. It's always on the implementer’s mind to make sure there's not something they're missing. Since the AWS S3 API is the de facto standard, we have to poke around with a lot of black-box testing, trying to find the edges of things. And a big concern for me is ensuring that you're not incidentally leaking information that might be valuable to an attacker through error messages, logging, those kinds of things. That is an area where different vendors handle things differently. You can return the right error code, but some may be more descriptive than others, and some may adhere exactly to what Amazon returns. 

Another security aspect, and this is just the reality of the situation, is that the S3 API is very stable, but it is based on XML, and XML libraries are constantly in flux. There are a lot of CVEs for different XML libraries, and that's something you have to follow as an implementer. We hope that with AWS participation in the community, we may be able to expand the API in new directions that might make it more secure for integrations that have not necessarily implemented an XML parser library within their software stack.
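To make the XML dependency concrete, here is a minimal sketch of the kind of parsing every S3 client or test harness ends up doing, using an error body shaped like AWS's documented S3 error format (the key name and request ID are made-up examples). It uses Python's standard-library parser; for untrusted responses, a hardened wrapper such as the defusedxml package mitigates the entity-expansion and external-entity vulnerability classes behind many of the CVEs mentioned above.

```python
import xml.etree.ElementTree as ET

# An S3-style error body, shaped like the documented AWS S3 error format.
# The key name and request ID are illustrative placeholders.
ERROR_XML = b"""<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchKey</Code>
  <Message>The specified key does not exist.</Message>
  <Key>missing.txt</Key>
  <RequestId>EXAMPLE123</RequestId>
</Error>"""

def parse_s3_error(body: bytes) -> dict:
    # Flatten the top-level child elements into a dict so a test harness
    # can compare Code/Message fields across vendor implementations.
    root = ET.fromstring(body)
    return {child.tag: child.text for child in root}

fields = parse_s3_error(ERROR_XML)
```

An interoperability check can then assert that `fields["Code"]` matches what AWS returns for the same request, while allowing `Message` wording to vary, which reflects the error-descriptiveness differences Adam describes above.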

Q: Can you elaborate more on the remote testing and how that went? Will remote testing be available at the Plugfest at SDC?

A: Adam: Ric, do you want to talk about remote testing a little more and how that helps?

Ric: As I mentioned earlier, the person who led our testing at the Denver Plugfest was actually doing it from Brazil, and it worked quite well. A lot of the people at the Denver Plugfest did not bring servers to set up, so as a remote participant you were basically testing against remote endpoints just like the people on site. You do miss the on-site discussions and the casual camaraderie that you have there in the NDA session, but you are under the same rules, which are kind of like fight club rules: what goes on inside that closed room is not leaked outside. 

Slack was the tool we used to reach out to the remote person and that was pretty good. I do think it's a trade-off. It's better to be in person, but the other thing that we thought coming out of that is that remote testing enables you to do some of these runs before you get to the face-to-face. We were debating how much we can frontload some of the actual test runs so that we use our face-to-face time for more strategic discussions and kind of group debugging of runs as well. I think overall it's a really good tool to have in our toolkit to enable people to participate remotely. It also helps take the pressure off for travel. It means you can still take part even if you can't get to the event itself.

Adam: I wanted to touch on that a little more. We had a Slack, and we had some open calls to have discussions, but it's not the same as being there in person. I think we all know that if you attended a conference virtually during COVID, it's not the same. We’re trying to make this a better experience, and we’re learning each time from our Plugfest experiences. One of the goals of the test tools effort is to allow you to do testing on your own at any time, with something like 90% of the coverage if we end up getting a really good suite going. Then, when we're in person at the Plugfest, we can talk about the hard stuff or the issues that you'd already found beforehand. For example, the fuzzy edge of the S3 API, where it starts talking to other things. Where does the cloud end and where do other services begin is something that's constantly on our mind. It's not as easy as it was in the past with SMB or other storage protocols that are very contained to that surface. Now, as compute and storage start to merge, it is a difficult conversation to have. We only have three days in person, and the more testing we can do beforehand, the better we can use our time at the Plugfest itself having those discussions, both structured and the “hallway track,” as people say, as well as attending presentations at SDC or presenting yourself. So, remote testing is something that we're really interested in doing.

Ric: I want to highlight that the hallway track is actually one of the really valuable things of being there face-to-face. You have the actual testing, but talking to other people who have been in this S3 implementation business is incredibly interesting and useful. You learn a lot. 

Adam: Yes, I think just in this call you learn a lot. We've all been doing stuff in storage for a long time. As I mentioned, I've spent almost 15 years on S3 object storage, and you find that it is a massive industry but a fairly small community. There are a lot of people that know each other, and there's a lot of value in getting to know those people. At the Plugfest, we have those discussions, asking why did you make that decision? Sometimes the reasons may surprise you, and that helps, because if you understand the why and you're testing against it, it's so much easier to think about the soft edges of things.

Q: Who should participate in these Plugfests? 

A: That's a good question. We've talked a lot about storage from a service and storage provider perspective, but we're really calling for anyone that touches the S3 API. That includes clients, open-source SDK developers, and vendors that integrate S3 as part of their solution. We really want to know as many use cases as possible. From my perspective as a cloud provider, we don't really have the luxury of stating a workflow; we just put it out there and people connect. And so, for selfish reasons, I'm very interested in as many varieties of S3 uses as possible.

Get involved with the SNIA Cloud Object Storage Test Tools TWG and Plugfests!