Connect async Rust services to FastCGI without ceremony.
A FastCGI client implemented in Rust, with optional runtime support for tokio and smol. Use it for direct php-fpm calls, connection reuse, streaming responses, or HTTP-to-FastCGI proxying.
Runtime support is feature-gated. Enable runtime-tokio, runtime-smol, or both,
depending on the executor already used by your service.
The crate supports short connection mode for straightforward request/response flows and keep-alive mode when you want to amortize connection setup across multiple FastCGI calls.
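Keep-alive mode is, at heart, connection reuse. The stdlib-only sketch below illustrates just that pattern: several request/response round trips amortized over a single TCP connection. A toy echo server stands in for php-fpm, and none of the FastCGI record framing is shown; the crate handles that for you.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Perform `n` request/response round trips over ONE connection,
// paying the connect() cost only once -- the essence of keep-alive.
fn round_trips(n: usize) -> std::io::Result<usize> {
    // Toy echo server standing in for php-fpm.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    thread::spawn(move || {
        let (mut sock, _) = listener.accept().unwrap();
        let mut buf = [0u8; 64];
        loop {
            let read = sock.read(&mut buf).unwrap();
            if read == 0 {
                break; // client closed the connection
            }
            sock.write_all(&buf[..read]).unwrap();
        }
    });

    let mut stream = TcpStream::connect(addr)?; // connect once
    let mut done = 0;
    for i in 0..n {
        let req = format!("request-{i}");
        stream.write_all(req.as_bytes())?; // reuse the same socket
        let mut echo = vec![0u8; req.len()];
        stream.read_exact(&mut echo)?;
        assert_eq!(echo, req.as_bytes());
        done += 1;
    }
    Ok(done)
}

fn main() {
    assert_eq!(round_trips(3).unwrap(), 3);
    println!("3 round trips over one connection");
}
```

In short connection mode the client is consumed after one exchange; in keep-alive mode the same connection serves every loop iteration, which is what makes setup cost amortizable.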
With the optional http feature, requests and responses can cross the boundary between FastCGI
types and the Rust http crate, which makes proxy-style integrations easier to build.
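Conceptually, that boundary crossing is a mapping from HTTP request metadata to CGI-style params. The sketch below shows the shape of that mapping using only the standard library; the helper name and signature are illustrative assumptions, not the crate's API, while the variable names themselves (REQUEST_METHOD, SCRIPT_FILENAME, and so on) are standard CGI.

```rust
use std::collections::HashMap;

// Illustrative mapping from HTTP request parts to CGI-style params,
// the kind of translation the `http` feature performs for you.
// This helper is a sketch, not part of fastcgi-client.
fn cgi_params(method: &str, path: &str, query: &str, doc_root: &str) -> HashMap<String, String> {
    let mut p = HashMap::new();
    p.insert("REQUEST_METHOD".into(), method.to_uppercase());
    p.insert("SCRIPT_NAME".into(), path.to_string());
    // SCRIPT_FILENAME is what php-fpm actually executes.
    p.insert("SCRIPT_FILENAME".into(), format!("{doc_root}{path}"));
    p.insert("QUERY_STRING".into(), query.to_string());
    p.insert("GATEWAY_INTERFACE".into(), "CGI/1.1".into());
    p.insert("SERVER_PROTOCOL".into(), "HTTP/1.1".into());
    p
}

fn main() {
    let p = cgi_params("get", "/index.php", "name=fastcgi", "/var/www/html");
    assert_eq!(p["REQUEST_METHOD"], "GET");
    assert_eq!(p["SCRIPT_FILENAME"], "/var/www/html/index.php");
    println!("{} params built", p.len());
}
```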
The simplest route is a short-lived connection to php-fpm. The request metadata mirrors the CGI-style values you already configure in web servers such as nginx.
cargo add fastcgi-client --features runtime-tokio
cargo add tokio --features full
use fastcgi_client::{Client, Request};
use tokio::{io, net::TcpStream};
let stream = TcpStream::connect(("127.0.0.1", 9000)).await?;
let client = Client::new_tokio(stream);
// `params` carries the CGI metadata: REQUEST_METHOD, SCRIPT_FILENAME, and so on.
let output = client.execute_once(Request::new(params, io::empty())).await?;
The examples assume a php-fpm server listening on 127.0.0.1:9000 and repository-mounted PHP test
fixtures under tests/php. This keeps the examples close to actual deployment topology.
docker run --rm --name php-fpm -v "$PWD:$PWD" -p 9000:9000 \
php:7.1.30-fpm -c /usr/local/etc/php/php.ini-development
The repository already contains runnable examples for the main integration styles. Start with a direct request, move to keep-alive reuse, then inspect the proxy example if you need HTTP ingress.
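Whichever integration style you pick, php-fpm hands back a CGI-style response on its stdout stream: header lines, a blank CRLF line, then the body. A proxy has to split the two before building an HTTP response. The helper below is a hedged stdlib sketch of that split, independent of the crate's own types:

```rust
// Split a CGI-style response into (header bytes, body bytes).
// Sketch only -- not part of the fastcgi-client API; assumes CRLF
// line endings, as php-fpm emits.
fn split_cgi_output(stdout: &[u8]) -> (&[u8], &[u8]) {
    match stdout.windows(4).position(|w| w == b"\r\n\r\n") {
        // Headers end before the separator; body starts after it.
        Some(i) => (&stdout[..i], &stdout[i + 4..]),
        // No separator found: treat everything as body.
        None => (&[], stdout),
    }
}

fn main() {
    let raw = b"Content-Type: text/html; charset=UTF-8\r\nX-Powered-By: PHP\r\n\r\n<p>hello</p>";
    let (headers, body) = split_cgi_output(raw);
    assert_eq!(body, &b"<p>hello</p>"[..]);
    println!("{} header bytes, {} body bytes", headers.len(), body.len());
}
```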
The website stays intentionally small. For API details, feature flags, and complete example commands, use the upstream documentation and repository pages directly.