I’m tackling quite a few APIs lately. Everything is REST, and there are various official and unofficial libraries for them. But I usually find the libraries are not as well documented as the service’s own REST documentation, and that calls generally require many includes and a lot of object instantiation.


Take SendInBlue (an email marketing platform) as an example. They have clear API docs at https://developers.sendinblue.com/reference which define every call, and they also have a PHP library: https://github.com/sendinblue/APIv3-php-library. After some wrestling and probing of the docs, it works a bit like this (paraphrasing my code a bit, so it may have typos):
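Something like this, per the library’s README (the exact classes vary by version, and the email/list values here are placeholders):

<?php
// Per the library's README at the time (class names may differ by version):
require __DIR__ . '/vendor/autoload.php';

$config = SendinBlue\Client\Configuration::getDefaultConfiguration()
    ->setApiKey('api-key', 'YOUR_API_KEY');

// Every call needs a Guzzle client plus the config object...
$apiInstance = new SendinBlue\Client\Api\ContactsApi(
    new GuzzleHttp\Client(),
    $config
);

// ...and a dedicated model object for the payload.
$createContact = new SendinBlue\Client\Model\CreateContact();
$createContact['email'] = 'jane@example.com';
$createContact['listIds'] = [2];

try {
    $result = $apiInstance->createContact($createContact);
    print_r($result);
} catch (Exception $e) {
    echo 'Exception when calling ContactsApi->createContact: ', $e->getMessage(), PHP_EOL;
}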
It’s a challenge to find all the relevant objects and functions; after doing this a few times I just wrote a very simple wrapper that takes data and makes the call via cURL, following the API docs:
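A minimal sketch of that wrapper (the function name, the header, and the error handling here are my own, not from any library):

<?php
// Minimal cURL wrapper: takes a method, URL and data array, sends JSON,
// and returns the decoded JSON response. (Sketch; the naming is mine.)
function apiCall(string $method, string $url, array $data = [], string $apiKey = '')
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, $method);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, [
        'Content-Type: application/json',
        'api-key: ' . $apiKey, // SendInBlue's v3 API authenticates via this header
    ]);
    if ($data) {
        curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($data));
    }
    $body = curl_exec($ch);
    curl_close($ch);

    return json_decode($body);
}

// Usage mirrors the REST docs directly: POST /v3/contacts
$result = apiCall('POST', 'https://api.sendinblue.com/v3/contacts', [
    'email'   => 'jane@example.com',
    'listIds' => [2],
], 'YOUR_API_KEY');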
I’m finding the same with the other APIs I’m working with (e.g. QuickBooks, Xero, and others). Using a simple cURL wrapper means I can make whatever REST calls I need without having to figure out the library’s object model every time. The ‘right’ way seems more complicated – am I just not smart enough?
There are pros and cons to both approaches.
The benefits of using a well-supported client library are that it supports autocompletion in your IDE, tells you when you’ve used the wrong key, gives you a heads-up that the API is changing when the library is regularly updated, and can make it obvious how to structure things more modularly (e.g. the Config can be set up in a single method that gets re-used internally). If you’re not using an IDE with autocompletion support, though, then yeah, a lot of the benefits start to feel like problems: you only find out about typos at run time (same as with raw JSON REST), a lot more typing for namespaces, manually reading class definitions or documentation, etc.
The benefits of the array method are that it’s quicker to get out the door and it’s more closely aligned with the raw JSON in the examples. The downsides are that it’s less modular, there are no checks that you’re sending the right values in the right places, and if the API changes you have to figure out what broke on your own.
In either case, though, you should still create that MyAppMySib wrapper library, whether you’re making manually encoded JSON REST calls or using a third-party library. That way, when their API does change, you mostly only have to change the code in that wrapper to handle renamed or moved keys, value formatting differences, etc.
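A sketch of what I mean (the names are illustrative, and it reuses the apiCall() helper idea from the earlier sketch):

<?php
// Thin app-level wrapper: the rest of the codebase calls this class,
// never the vendor library or the raw endpoints directly.
class MyAppMySib
{
    public function __construct(private string $apiKey) {}

    // If SendInBlue renames 'listIds' or moves the endpoint,
    // only this method has to change.
    public function addContact(string $email, array $listIds): ?object
    {
        return apiCall('POST', 'https://api.sendinblue.com/v3/contacts', [
            'email'   => $email,
            'listIds' => $listIds,
        ], $this->apiKey);
    }
}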
This would be my preferred way of formatting their API call:
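Illustrative only, using the SendInBlue classes from their README (the contact data is a placeholder):

<?php
// Config set up in one place; each call stays short and readable.
$sibConfig = SendinBlue\Client\Configuration::getDefaultConfiguration()
    ->setApiKey('api-key', getenv('SIB_API_KEY'));

$contactsApi = new SendinBlue\Client\Api\ContactsApi(new GuzzleHttp\Client(), $sibConfig);

$newContact = new SendinBlue\Client\Model\CreateContact([
    'email'   => 'jane@example.com',
    'listIds' => [2],
]);

$createResult = $contactsApi->createContact($newContact);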
[as you can see I also prefer lowerCamelCase variables, heh]
Another way to do it:
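Going the plain-array route instead (sketch, reusing an apiCall()-style helper like the one above):

<?php
// Keep the payload shaped exactly like the raw JSON in the REST docs.
$payload = [
    'email'   => 'jane@example.com',
    'listIds' => [2],
];

$createResult = apiCall(
    'POST',
    'https://api.sendinblue.com/v3/contacts',
    $payload,
    getenv('SIB_API_KEY')
);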
“lowerCamelCase” is a funny one. Last I checked a camel’s humps aren’t peaking at its behind. 🙂
camelCase, PascalCase, kebab-case, snake_case and CONST_CASE.
Experience: over half a decade working on a product which consumes a LOT of APIs (Facebook, Twitter, YouTube, Google; but also e-commerce-only ones).
In theory, vendor SDKs do sound useful, especially for special cases:
automatic handling of refresh tokens
specific nuances, like Facebook requiring a different host for video uploads, which their SDK takes care of transparently
However, given this context and in hindsight, in 99% of cases the API/vendor-specific clients were never useful:
every vendor has a different idea of how to do the abstraction, which creates friction when working on/with them
some rely on auto-generated runtime code or dynamic calls, which doesn’t help with IDEs or static analyzers
some use mutable/stateful abstractions to formulate requests, which just creates problems
every vendor has a different idea of whether they accept external low-level clients or not: some did not allow a custom HTTP client, some required Guzzle, some HTTPlug; it just got more diverse
those libraries had Composer dependencies which, over time, start to create version conflicts because the vendor (or a third party) neglects them. Then you need to start chasing PRs, hoping for a timely merge, or just use temporary forks – again creating friction.
In my experience, when using many different APIs, the cons absolutely outweigh the benefits.
Additionally, there were technical requirements for the product:
logging – not just raw logging, but logging with per-call context. Nigh impossible in some SDK abstractions: we had to extend/override n+1 classes and wrap code to make it happen, and then it turned out the vendor obviously hadn’t meant it to be used that way, so releases kept breaking these things
automatic retry (libs rarely did this themselves, and often the decision wasn’t just based on status codes but also required payload inspection)
detect invalid tokens
allow mocking on the lowest level for writing tests
Further, except in rare cases, consider that most calls are a very simplistic form of the following (see the sketch below):
produce the required parameters (GET/POST)
send them to the specific endpoint with the right headers/tokens
decode JSON
So much “foreign overhead” just wasn’t worth it.
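Spelled out, that simple form is just this (a generic sketch with no vendor SDK; the endpoint and token are placeholders):

<?php
// 1. produce the required parameters
$params = ['email' => 'jane@example.com'];

// 2. send them to the specific endpoint with the right headers/token
$context = stream_context_create(['http' => [
    'method'  => 'POST',
    'header'  => "Content-Type: application/json\r\n"
               . 'Authorization: Bearer ' . getenv('API_TOKEN') . "\r\n",
    'content' => json_encode($params),
]]);
$raw = file_get_contents('https://api.example.com/v1/contacts', false, $context);

// 3. decode the JSON
$response = json_decode($raw);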
And some SDKs would return raw JSON, others decoded JSON, others their own DTOs, but without proper validation and with incredible magic; in some, those DTOs actually had references back to the client, etc. It’s such a mess.
However brilliant some platforms may be, I often had the feeling that the SDKs were written by a B-team and that properly maintaining that code was an afterthought.
In a nutshell, coming up with our own (code) infrastructure allowed us to have less friction maintaining those APIs and just move forward over the years. And it’s also less mental overhead for developers:
all external API access uses the same patterns
the HTTP client is used the same way everywhere
logging/retry middleware was always the same concept, just with slight adaptations for the nuances of each API (see the sketch after this list)
mocking was the same for all. That was especially good so as not to have 100+ ways of thinking about how to mock a certain API. Once you’d figured it out for a few APIs, re-usability of the pattern was high and productivity better
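As a sketch of that shared concept (PSR-18 style; the class is illustrative and the retry rule deliberately simplistic):

<?php
use Psr\Http\Client\ClientInterface;
use Psr\Http\Message\RequestInterface;
use Psr\Http\Message\ResponseInterface;
use Psr\Log\LoggerInterface;

// One decorator, reused for every vendor; per-API nuances
// (which status codes/payloads warrant a retry) hook in at one spot.
final class RetryLoggingClient implements ClientInterface
{
    public function __construct(
        private ClientInterface $inner,
        private LoggerInterface $logger,
        private int $maxAttempts = 3,
    ) {}

    public function sendRequest(RequestInterface $request): ResponseInterface
    {
        for ($attempt = 1; ; $attempt++) {
            $response = $this->inner->sendRequest($request);
            $this->logger->info('api call', [
                'uri'     => (string) $request->getUri(),
                'status'  => $response->getStatusCode(),
                'attempt' => $attempt,
            ]);
            // Real versions also inspected payloads, not just status codes.
            if ($response->getStatusCode() < 500 || $attempt >= $this->maxAttempts) {
                return $response;
            }
        }
    }
}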
Thanks for your input – and definitely this – at least for me! I’m probably older than most here and maybe have less capacity for mental overhead 😛 I can refer to the REST docs – if they say POST data X to endpoint Y, then I know I need to prepare X as per the example – then (in one line of code) post it to Y.
I prefer just using cURL. I’m told by so many devs that just means I haven’t given Guzzle a good enough test run.
After working with Guzzle on quite a few projects, I still prefer PHP’s cURL lib.
It’s easier for me when I switch between projects to just have a consistent native function.
At least it’s not just me… and your username fills me with confidence 😛
I disliked how much boilerplate code I had to constantly write when using cURL and Guzzle, so I created phpexperts/rest-speaker.
Now all I do is:
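(From memory of the package’s README – treat the auth setup here as an assumption and check the repo for the real thing:)

<?php
use PHPExperts\RESTSpeaker\RESTAuth;
use PHPExperts\RESTSpeaker\RESTSpeaker;

// Sketch per the package README; the auth-mode constant and
// constructor signature are assumptions, not verified.
$auth = new RESTAuth(RESTAuth::AUTH_NONE);
$api  = new RESTSpeaker($auth, 'https://api.example.com/');

$response = $api->get('v1/contacts');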
$response is a standard PHP object decoded from the JSON response.
I personally think the choice of whether or not to use a library for HTTP requests depends on how much the library buys you over a hand-rolled version, as well as on a programmer’s amenability to libraries like the one in your snippet. Some people have a preference for libraries that is almost second nature, for a multitude of reasons – and they’re not wrong. I don’t think you’re dumb for not using a package for API calls – you’re just someone solving a problem differently.
Depends on whether I can get the request and response out of the library or not; plenty of libs just throw an exception with no way of logging the request and response.
I’m coming around to being a superfan of GraphQL myself, since it tends to push more logic server-side such that you can use custom queries and mutations that would otherwise have to be implemented in a library. Still prefer to wrap it in a library of course — separating concerns is still good — but the library often ends up much thinner as a result.
Plus there’s stuff like Symfony API Platform that makes it so REST and GraphQL don’t have to be an either-or choice. API Platform is so good it’s bonkers; it’s what keeps me in PHP these days.
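For what it’s worth, the call itself is still just one JSON POST (generic sketch; the endpoint and schema are made up), which is why the wrapper can stay so thin:

<?php
$query = <<<'GQL'
query ($id: ID!) {
  contact(id: $id) { email listIds }
}
GQL;

$ch = curl_init('https://api.example.com/graphql');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => json_encode(['query' => $query, 'variables' => ['id' => '42']]),
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
]);
$response = json_decode(curl_exec($ch));
curl_close($ch);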
I have had this negative experience with quite a few supposed “API clients”, like NeverBounce’s. By the time I tried to implement NeverBounce’s official PHP client, I had to cross-reference their API docs so often and reproduce so much low-level code that I felt like I hadn’t saved ANY effort at all.
So instead, I created phpexperts/neverbounce. I won’t link to it here, because the mods of r/PHP recently banned me for 3 months for “self-promotion”, even though all I do is try to help people like you out with snippets of my own code. I find that incredibly unjust, to be honest. But what can you do?
Anyway, if you go to the project’s GitHub page (which, again, I can’t link in this comment because it’s considered “self-promotion”), you’ll see that implementing MY client is as easy as:
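Roughly this (a sketch from memory; the exact class and method names are assumptions – check the repo):

<?php
use PHPExperts\NeverBounce\NeverBounceClient;

// Assumed names based on my description above; not verified
// against the current package.
$client  = NeverBounceClient::build(getenv('NEVERBOUNCE_API_KEY'));
$isValid = $client->isValidEmail('jane@example.com');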
The bulk email process is equally easy and straightforward, but requires a certain amount of async handling (basically, timed retry intervals).
But my point is that as a superior software architect, I utilized Encapsulation of the API itself, so that the end-developer DOES NOT HAVE TO LOOK AT THE API DOCS AT ALL. And I made it damn simple.
I’ve done this for several APIs in recent years, particularly my ZuoraAPIClient (75% of their API) and SalesforceAPIClient (~20%).
Let’s see how to do it via the official NeverBounce PHP API Client:
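Per their README (paraphrased; the signatures may have changed since):

<?php
\NeverBounce\Auth::setApiKey(getenv('NEVERBOUNCE_API_KEY'));

// Single::check() returns a VerificationObject [1]; to interpret
// $result->result you're back in the API docs.
$result = \NeverBounce\Single::check('jane@example.com');

if ($result->result === 'valid') {
    echo "Deliverable\n";
}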
Because I made it so that the client does all the heavy lifting and exposes itself via DTOs in both Request and Response objects, it’s much, much easier to use and doesn’t require you to look at the NeverBounce API docs at all, except to find out how to get API keys.
[1] https://github.com/NeverBounce/NeverBounceAPI-PHP/blob/master/src/Object/VerificationObject.php