Protobuf in HTTP requests

It would be really good to have protobuf support in HTTP requests - encoding/decoding a protobuf body in a request/response using a provided proto schema. This is something Postman users have been asking for for years, but still don't have.

Hi, in this use case, are the .proto files only used to define the messages for encoding/decoding data, without defining any service or rpc for endpoints?

Yes, since this is HTTP I only meant encoding/decoding against a provided proto schema. Currently I need to use external tools to encode/decode the messages and then store the binary data as a file to use it as a body. Having support for this in Apidog would be so cool!
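For context, the transformation such a tool performs is not magic - it's just the protobuf wire format. Here's a rough pure-Python sketch of the encoding side, assuming a made-up schema `message User { int32 id = 1; string name = 2; }` (the schema, field numbers, and values are purely illustrative, not from any real product):

```python
# Minimal protobuf wire-format encoder (illustration only), for the
# hypothetical schema: message User { int32 id = 1; string name = 2; }

def encode_varint(n: int) -> bytes:
    """Encode a non-negative integer as a protobuf varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_user(user_id: int, name: str) -> bytes:
    """Serialize the hypothetical User message to wire-format bytes."""
    body = bytearray()
    body += encode_varint((1 << 3) | 0)  # field 1, wire type 0 (varint)
    body += encode_varint(user_id)
    name_bytes = name.encode("utf-8")
    body += encode_varint((2 << 3) | 2)  # field 2, wire type 2 (length-delimited)
    body += encode_varint(len(name_bytes))
    body += name_bytes
    return bytes(body)

payload = encode_user(150, "test")
print(payload.hex())  # 089601120474657374
```

The resulting bytes are exactly what would go into the request body with `Content-Type: application/x-protobuf`; built-in support would mean Apidog derives this from the schema instead of the user producing the binary by hand.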

Thanks for your feedback. It would be helpful if you could share some details:

  1. What programming language and framework do you use to run this HTTP+protobuf server?
  2. Are messages defined in one .proto file or are they spread across multiple .proto files?

Well, no programming language or server is involved in my flow. Let me explain.

Some endpoints in our product send or respond with protobuf data in the body and a Content-Type of application/x-protobuf. So to send such a request I have to use third-party tools like CyberChef (it's open source BTW) to encode my body using a proto schema, save it as a binary file, and attach it as the body. Similarly, when an endpoint responds with protobuf, I have to save the body as a binary file and decode it in CyberChef using a proto schema. So it would be very convenient to have the ability to define a proto schema right on the request and response (because a request and its response can use different schemas) and encode/decode protobuf entirely in Apidog. We use different schemas for different endpoints, so having them per request would work perfectly.
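To make the response half of that flow concrete, here is a rough pure-Python sketch of decoding an `application/x-protobuf` body into raw fields, handling only the varint and length-delimited wire types (the example bytes are for a hypothetical message with int32 field 1 = 150 and string field 2 = "test", not from any real endpoint):

```python
# Minimal protobuf wire-format decoder (illustration only): maps field
# numbers to raw values; a real tool would also apply the schema's types.

def decode_varint(buf: bytes, i: int) -> tuple[int, int]:
    """Decode a varint starting at index i; return (value, next index)."""
    shift = result = 0
    while True:
        b = buf[i]
        i += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):  # high bit clear: last byte of the varint
            return result, i
        shift += 7

def decode_fields(buf: bytes) -> dict[int, object]:
    """Walk the buffer tag-by-tag, collecting field number -> raw value."""
    fields: dict[int, object] = {}
    i = 0
    while i < len(buf):
        tag, i = decode_varint(buf, i)
        field_no, wire_type = tag >> 3, tag & 0x07
        if wire_type == 0:        # varint
            fields[field_no], i = decode_varint(buf, i)
        elif wire_type == 2:      # length-delimited (string/bytes/submessage)
            length, i = decode_varint(buf, i)
            fields[field_no] = buf[i:i + length]
            i += length
        else:
            raise ValueError(f"unsupported wire type {wire_type}")
    return fields

body = bytes.fromhex("089601120474657374")
print(decode_fields(body))  # {1: 150, 2: b'test'}
```

This is the step currently done by saving the response to a file and feeding it to CyberChef; doing it in-app against a per-request schema would remove the round trip entirely.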

Please let me know if you have any questions and need any further clarifications.

Thanks for the additional information, it is clear.

Have you guys considered implementing this approach to support protobuf in HTTP requests?

This feature is already on our feature request list. However, I can't provide any information on when it will be developed.