Serverless PHP - The Native Way

I got a bit triggered by a different post which claimed that most efforts to provide serverless support for PHP focus on AWS and that AWS already has native support for PHP.

Technically neither statement is true.

Firstly, AWS does not have native support for PHP. In the serverless world, having support for a language means that given a function written in that language, all it takes is an invocation method (HTTP call, queue trigger, etc.) to run it. When you go to the AWS Lambda console and create a function, you should be able to select a language, write the function and run it. It means that you provide the code, AWS provides the runtime to execute it and it just works.

AWS doesn't provide that for PHP. What AWS provides is a way to define a custom runtime under which you can execute your code. This means you need to provide not just a function, but all the bootstrapping required to run your code and define the runtime (in this case, a PHP interpreter) to execute it. Alternatively, you can provide a Docker image for the same purpose.

The Issue

The issue with the AWS approach to PHP serverless is that the benefit is not worth the effort, given that all of that setup ties you to one specific environment. If you have time, go through the steps listed on the AWS link and then consider this:

Even within AWS you could simply set up an EKS cluster (AWS has finally improved their tooling enough to have a production cluster with sensible security defaults up in minutes) and deploy something like OpenFAAS, which allows you to deploy serverless functions in a variety of languages under optimised containers. That is essentially what you get with the second (Docker image) option for PHP on Lambda anyway.

But the benefit of having an EKS cluster is that you could do serverless functions via OpenFAAS (or other similar serverless environments - there's also Apache OpenWhisk, I should do a piece comparing them) AND deploy server-full(? I guess) applications in a Kubernetes cluster and benefit from goodies like service mesh, autoscaling and so on.

But wait! There's more! A Kubernetes setup is also portable! Aside from the creation of the cluster itself, if tomorrow you decide to say "buh bye AWS, hello Azure!" you can take the setup with you and be up and running in minutes.

What GCP provides

In contrast, Google Cloud Functions for PHP provides everything, including the bootstrap. This means you can write a simple function that takes a PSR-compatible RequestInterface and returns a string response.

That's right: sadly, at the moment you must return a plain string, though the plan is to eventually support returning a ResponseInterface.

Beyond that, things are pretty straightforward. You can package your code in a zip, together with a vendor folder for dependencies. You whip up a script with various functions and, as long as a function meets the criterion of taking a single RequestInterface, it can be selected as an entrypoint.
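For illustration, a minimal function following that contract might look roughly like the sketch below. The name helloHttp and the greeting are just placeholders, and depending on the runtime version you may also need the Functions Framework for PHP as a Composer dependency.

    <?php
    // index.php - sketch of a minimal HTTP function for the contract
    // described above: take a PSR-7 request, return a plain string.

    use Psr\Http\Message\ServerRequestInterface;

    function helloHttp(ServerRequestInterface $request): string
    {
        // Query parameters are available on the PSR-7 request object.
        $name = $request->getQueryParams()['name'] ?? 'world';

        return sprintf("Hello, %s!\n", $name);
    }

The entrypoint you select when creating the function is then simply the function's name (helloHttp here).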

A pleasant surprise is that the performance of a fairly basic PHP function (which reads an environment variable, decodes a JSON request and responds with the env value plus some string from the JSON request) is on par with most other offerings on Cloud Functions (tested against Node and Go): roughly ~100ms for warm calls, with the main difference being that on a cold call this plain PHP function took a bit longer (~1.4s vs ~0.6s).
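For reference, the test function was along the lines of the sketch below; the GREETING variable and the "name" field are placeholder names used here for illustration.

    <?php
    // Sketch of the kind of function used for the timing comparison:
    // read an env variable, decode the JSON request body, echo both back.

    use Psr\Http\Message\ServerRequestInterface;

    function benchHttp(ServerRequestInterface $request): string
    {
        // getenv() returns false if the variable is unset, so fall back.
        $prefix = getenv('GREETING') ?: 'hello';

        // Decode the JSON payload from the request body.
        $payload = json_decode((string) $request->getBody(), true) ?? [];
        $name = $payload['name'] ?? 'anonymous';

        return $prefix . ' ' . $name;
    }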

Don't serverless and framework

As long as you don't aim to use a serverless function as a poor man's webserver and throw a framework with complex routing behind it, this should be a decent way to supplement capacity on an API.

Why isn't it a great way to use a PHP framework on a serverless functions platform?

It's because the entire runtime goes up and eventually goes away after the function has ended. When a request is fired, on the platform side this is (roughly) what happens:

  • the runtime is provisioned
  • the function is bootstrapped
  • the function runs (and, for interpreted languages, custom dependencies are loaded).

Now, PHP can and does make use of bytecode cache between successive executions (the function remains "warm" for a while) but eventually the runtime gets "destroyed".

Now, imagine using a framework. On each execution, the framework must be bootstrapped with its dependencies. Since it belongs to the function (as an entrypoint), this must be done within the function itself. You will load the configuration manager, the DI container, the router, configuration files and whatever else the framework needs.
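To make that concrete, a framework-backed entrypoint ends up looking something like the sketch below, where App\Kernel and its methods are hypothetical stand-ins for whatever bootstrap your framework of choice requires.

    <?php
    // Hypothetical sketch of what a framework-backed entrypoint implies.
    // App\Kernel stands in for any full-stack framework's container/router
    // bootstrap; it is not a real API.

    use Psr\Http\Message\ServerRequestInterface;

    function frameworkHttp(ServerRequestInterface $request): string
    {
        // All of this runs inside the function: configuration is parsed,
        // the DI container is built, routes are compiled.
        $kernel = new \App\Kernel(getenv('APP_ENV') ?: 'prod');
        $kernel->boot();

        // Only then does the actual routing and controller work happen.
        $response = $kernel->handle($request);

        return (string) $response->getBody();
    }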

On a regular server, things would get cached in a locally persisted bytecode cache (opcache). As you call various endpoints with various parameters, the things needed by each call would make their way into that cache.

On serverless executions, to add insult to injury, not only do you need to warm up the function initially, but if different routes are managed by the function via a framework router, each route needs slightly different code. So even between successive calls you'd hit situations where the dependencies for that particular route are not "warm"... technically a bunch of "cold" requests, resulting in (somewhat) lower performance.
