Vert.x allows you to easily write non-blocking HTTP clients and servers.
Vert.x supports the HTTP/1.0, HTTP/1.1, HTTP/2 and HTTP/3 protocols.
The base API for HTTP is the same for HTTP/1.x, HTTP/2 and HTTP/3; specific API features are available for dealing with the protocol peculiarities.
The simplest way to create an HTTP server, using the default configuration, is as follows:
HttpServer server = vertx.createHttpServer();
By default, the server supports HTTP/1 and HTTP/2 in plain text, as well as WebSocket.
If you don’t want the defaults, a server can be configured by passing in an HttpServerConfig instance when creating it:
HttpServerConfig config = new HttpServerConfig()
.setVersions(HttpVersion.HTTP_1_1)
.setMaxFormFields(512)
.setHttp1Config(new Http1ServerConfig()
.setMaxInitialLineLength(1024))
.setCompression(new CompressionConfig()
.setCompressionEnabled(true)
.addGzip());
HttpServer server = vertx.createHttpServer(config);
Configuration addresses the following parts:
- Accepted versions
- Timeouts and limits
- HTTP/1.x specific configuration
- HTTP/2 specific configuration
- HTTP/3 specific configuration
- WebSocket
- TLS
- Transport: TCP and QUIC
- Content compression
- Observability
- Network logging
Vert.x HTTP servers can be configured to use HTTPS in exactly the same way as TCP or QUIC servers.
ServerSSLOptions sslOptions = new ServerSSLOptions()
.setKeyCertOptions(
new JksOptions().
setPath("/path/to/your/server-keystore.jks").
setPassword("password-of-your-keystore")
);
HttpServerConfig config = new HttpServerConfig()
.setSsl(true);
HttpServer server = vertx.createHttpServer(config, sslOptions);
You can read more about SSL server configuration.
Vert.x supports HTTP/1.1 and HTTP/1.0 over plaintext and TLS.
HttpServerConfig config = new HttpServerConfig()
.setSsl(true)
.setHttp1Config(new Http1ServerConfig()
.setMaxInitialLineLength(1024));
Http1ServerConfig captures the configuration of HTTP/1.x specific aspects.
Vert.x supports HTTP/2 over TLS h2 and over TCP h2c.
- h2 identifies the HTTP/2 protocol when used over TLS
- h2c identifies the HTTP/2 protocol when used in clear text over TCP; such connections are established either with an HTTP/1.1 upgraded request or directly
To handle h2 requests, TLS must be enabled:
HttpServerConfig config = new HttpServerConfig()
.setSsl(true)
.setHttp2Config(new Http2ServerConfig()
.setInitialSettings(new Http2Settings()
.setMaxConcurrentStreams(250)));
With plain text (TLS disabled), the server handles h2c requests that upgrade the connection to HTTP/2 with an HTTP/1.1 upgrade request. It also accepts direct h2c (with prior knowledge) connections beginning with the PRI * HTTP/2.0\r\nSM\r\n preface.
Warning: browsers do not support h2c; for serving websites you should use h2 and not h2c.
Http2ServerConfig captures the configuration of HTTP/2 specific aspects.
When a server accepts an HTTP/2 connection, it sends to the client its initial settings. These settings define how the client can use the connection, the default initial settings for a server are:
- getMaxConcurrentStreams: 100, as recommended by the HTTP/2 RFC
- the default HTTP/2 settings values for the remaining settings
Vert.x supports HTTP/3 over QUIC (todo : make link to QUIC)
QUIC is a new transport layer for HTTP that replaces TCP and has TLS built in; in fact, TLS is mandatory.
HttpServerConfig config = new HttpServerConfig()
.setVersions(HttpVersion.HTTP_3);
Http3ServerConfig captures the configuration of HTTP/3 specific aspects.
QUIC implements some of the features of HTTP/2, such as the maximum number of concurrent streams a connection can handle:
HttpServerConfig config = new HttpServerConfig()
.setVersions(HttpVersion.HTTP_3);
QuicServerConfig quicConfig = config.getQuicConfig();
quicConfig.setTransportConfig(new QuicConfig()
.setInitialMaxStreamsBidi(250));
A server can handle both TCP and QUIC.
HttpServerConfig config = new HttpServerConfig()
.setVersions(HttpVersion.HTTP_1_1, HttpVersion.HTTP_2, HttpVersion.HTTP_3)
.setSsl(true);
ServerSSLOptions sslOptions = new ServerSSLOptions()
.setKeyCertOptions(new JksOptions().setPath("/path/to/my/keystore"));
HttpServer server = vertx.createHttpServer(config, sslOptions);
Hybrid servers bind two ports:
- a TCP port serving HTTP/1.x and/or HTTP/2 traffic
- a QUIC port (UDP) serving HTTP/3 traffic
Each port can be configured independently:
HttpServerConfig config = new HttpServerConfig()
.setVersions(HttpVersion.HTTP_1_1, HttpVersion.HTTP_2, HttpVersion.HTTP_3)
.setTcpPort(tcpPort)
.setQuicPort(quicPort);
Or they can be the same:
HttpServerConfig config = new HttpServerConfig()
.setVersions(HttpVersion.HTTP_1_1, HttpVersion.HTTP_2, HttpVersion.HTTP_3)
.setPort(port);
You can pass configuration to createHttpServer methods to configure an HTTP server.
Alternatively, you can build a server with the builder API:
HttpServer server = vertx
.httpServerBuilder()
.with(config)
.build();
In addition to HttpServerConfig and ServerSSLOptions, you can set:
- a connection event handler notified when a client connects to this server
- SSLEngineOptions to configure the SSL engine
To tell the server to listen for incoming requests you use one of the listen
alternatives.
To tell the server to listen at the host and port as specified in the configuration:
HttpServer server = vertx.createHttpServer();
server.listen();
Or to specify the host and port in the call to listen, ignoring what is configured in the configuration:
HttpServer server = vertx.createHttpServer();
server.listen(8080, "myhost.com");
The default host is 0.0.0.0 which means 'listen on all available addresses' and the default port is 80.
The actual bind is asynchronous so the server might not actually be listening until some time after the call to listen has returned.
If you want to be notified when the server is actually listening you can provide a handler to the listen call.
For example:
HttpServer server = vertx.createHttpServer();
server
.listen(8080, "myhost.com")
.onComplete(res -> {
if (res.succeeded()) {
System.out.println("Server is now listening!");
} else {
System.out.println("Failed to bind!");
}
});
When running on JDK 16+, or using a native transport, a server can listen to Unix domain sockets:
HttpServer httpServer = vertx.createHttpServer();
// Only available when running on JDK16+, or using a native transport
SocketAddress address = SocketAddress.domainSocketAddress("/var/tmp/myservice.sock");
httpServer
.requestHandler(req -> {
// Handle application
})
.listen(address)
.onComplete(ar -> {
if (ar.succeeded()) {
// Bound to socket
} else {
// Handle failure
}
});
To be notified when a request arrives you need to set a requestHandler:
HttpServer server = vertx.createHttpServer();
server.requestHandler(request -> {
// Handle the request in here
});
When a request arrives, the request handler is called, passing in an instance of HttpServerRequest.
This object represents the server side HTTP request.
The handler is called when the headers of the request have been fully read.
If the request contains a body, that body will arrive at the server some time after the request handler has been called.
The server request object allows you to retrieve the uri,
path, params and
headers, amongst other things.
Each server request object is associated with one server response object. You use
response to get a reference to the HttpServerResponse
object.
Here’s a simple example of a server handling a request and replying with "hello world" to it.
vertx
.createHttpServer()
.requestHandler(request -> {
request.response().end("Hello world");
})
.listen(8080);
The version of HTTP specified in the request can be retrieved with version.
Use method to retrieve the HTTP method of the request (i.e. whether it’s GET, POST, PUT, DELETE, HEAD, OPTIONS, etc.).
Use uri to retrieve the URI of the request.
Note that this is the actual URI as passed in the HTTP request, and it’s almost always a relative URI.
The URI is as defined in Section 5.1.2 of the HTTP specification - Request-URI
Use path to return the path part of the URI
For example, if the request URI was /a/b/c/page.html?param1=abc&param2=xyz
then the path would be /a/b/c/page.html
Use query to return the query string part of the URI
For example, if the request URI was /a/b/c/page.html?param1=abc&param2=xyz
then the query would be param1=abc&param2=xyz
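Putting the accessors above together, a handler can inspect the parts of the request URI; this is a sketch based on the uri, path and query methods described here:

```java
server.requestHandler(request -> {
  // For a request to /a/b/c/page.html?param1=abc&param2=xyz
  System.out.println("uri:   " + request.uri());   // the full relative URI
  System.out.println("path:  " + request.path());  // /a/b/c/page.html
  System.out.println("query: " + request.query()); // param1=abc&param2=xyz
});
```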
Use headers to return the headers of the HTTP request.
This returns an instance of MultiMap - which is like a normal Map or Hash but allows multiple
values for the same key - this is because HTTP allows multiple header values with the same key.
It also has case-insensitive keys, which means you can do the following:
MultiMap headers = request.headers();
// Get the User-Agent:
System.out.println("User agent is " + headers.get("user-agent"));
// You can also do this and get the same result:
System.out.println("User agent is " + headers.get("User-Agent"));
Use authority to return the authority of the HTTP request.
For HTTP/1.x requests the host header is returned, for HTTP/2 and HTTP/3 requests the :authority pseudo header is returned.
Use params to return the parameters of the HTTP request.
Just like headers this returns an instance of MultiMap
as there can be more than one parameter with the same name.
Request parameters are sent on the request URI, after the path. For example, if the URI was /page.html?param1=abc&param2=xyz
then the parameters would contain the following:
param1: 'abc'
param2: 'xyz'
Note that these request parameters are retrieved from the URL of the request. If you have form attributes that have been sent as part of the submission of an HTML form submitted in the body of a multipart/form-data request, then they will not appear in the params here.
The address of the sender of the request can be retrieved with remoteAddress.
The URI passed in an HTTP request is usually relative. If you wish to retrieve the absolute URI corresponding
to the request, you can get it with absoluteURI
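For example, combining remoteAddress and absoluteURI in a handler (a sketch using the accessors just described):

```java
server.requestHandler(request -> {
  // Address of the peer that sent the request
  System.out.println("Client:    " + request.remoteAddress());
  // Absolute form of the (usually relative) request URI
  System.out.println("Requested: " + request.absoluteURI());
});
```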
The endHandler of the request is invoked when the entire request,
including any body has been fully read.
Often an HTTP request contains a body that we want to read. As previously mentioned the request handler is called when just the headers of the request have arrived so the request object does not have a body at that point.
This is because the body may be very large (e.g. a file upload) and we don’t generally want to buffer the entire body in memory before handing it to you, as that could cause the server to exhaust available memory.
To receive the body, you can use the handler on the request,
this will get called every time a chunk of the request body arrives. Here’s an example:
request.handler(buffer -> {
System.out.println("I have received a chunk of the body of length " + buffer.length());
});
The object passed into the handler is a Buffer, and the handler can be called multiple times as data arrives from the network, depending on the size of the body.
In some cases (e.g. if the body is small) you will want to aggregate the entire body in memory, so you could do the aggregation yourself as follows:
Buffer totalBuffer = Buffer.buffer();
request.handler(buffer -> {
System.out.println("I have received a chunk of the body of length " + buffer.length());
totalBuffer.appendBuffer(buffer);
});
request.endHandler(v -> {
System.out.println("Full body received, length = " + totalBuffer.length());
});
This is such a common case that Vert.x provides a bodyHandler to do this for you. The body handler is called once when all the body has been received:
request.bodyHandler(totalBuffer -> {
System.out.println("Full body received, length = " + totalBuffer.length());
});
The request object is a ReadStream so you can pipe the request body to any WriteStream instance.
See the chapter on streams for a detailed explanation.
HTML forms can be submitted with either a content type of application/x-www-form-urlencoded or multipart/form-data.
For url encoded forms, the form attributes are encoded in the url, just like normal query parameters.
For multi-part forms they are encoded in the request body, and as such are not available until the entire body has been read from the wire.
Multi-part forms can also contain file uploads.
If you want to retrieve the attributes of a multi-part form you should tell Vert.x that you expect to receive
such a form before any of the body is read by calling setExpectMultipart
with true, and then you should retrieve the actual attributes using formAttributes
once the entire body has been read:
server.requestHandler(request -> {
request.setExpectMultipart(true);
request.endHandler(v -> {
// The body has now been fully read, so retrieve the form attributes
MultiMap formAttributes = request.formAttributes();
});
});
Form attributes have a maximum size of 8192 bytes. When the client submits a form with an attribute size greater than this value, an exception is triggered on the HttpServerRequest exception handler. You can set a different maximum size with setMaxFormAttributeSize.
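For instance, raising the limit might look like the sketch below; the exact placement of setMaxFormAttributeSize is an assumption, modeled on the other limits (such as setMaxFormFields) shown on HttpServerConfig earlier:

```java
// Sketch: allow form attributes up to 16 KiB
// (assumes the setter is exposed on HttpServerConfig, as other limits are)
HttpServerConfig config = new HttpServerConfig()
    .setMaxFormAttributeSize(16 * 1024);
HttpServer server = vertx.createHttpServer(config);
```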
Vert.x can also handle file uploads which are encoded in a multi-part request body.
To receive file uploads you tell Vert.x to expect a multi-part form and set an
uploadHandler on the request.
This handler will be called once for every upload that arrives on the server.
The object passed into the handler is a HttpServerFileUpload instance.
server.requestHandler(request -> {
request.setExpectMultipart(true);
request.uploadHandler(upload -> {
System.out.println("Processing a file upload " + upload.name());
});
});
File uploads can be large, so we don’t provide the entire upload in a single buffer as that might result in memory exhaustion; instead, the upload data is received in chunks:
request.uploadHandler(upload -> {
upload.handler(chunk -> {
System.out.println("Received a chunk of the upload of length " + chunk.length());
});
});
The upload object is a ReadStream so you can pipe the upload to any WriteStream instance. See the chapter on streams for a detailed explanation.
If you just want to upload the file to disk somewhere you can use streamToFileSystem:
request.uploadHandler(upload -> {
upload.streamToFileSystem("myuploads_directory/" + upload.filename());
});
Warning: make sure you check the filename in a production system to avoid malicious clients uploading files to arbitrary places on your filesystem. See security notes for more information.
To remove a cookie, use removeCookie.
To add a cookie, use addCookie.
The set of cookies will be written back in the response automatically when the response headers are written so the browser can store them.
Cookies are described by instances of Cookie. This allows you to retrieve the name,
value, domain, path and other normal cookie properties.
Same Site Cookies let servers require that a cookie shouldn’t be sent with cross-site (where Site is defined by the registrable domain) requests, which provides some protection against cross-site request forgery attacks. This kind of cookie is enabled using the setter setSameSite.
Same site cookies can have one of 3 values:
- None - The browser will send cookies with both cross-site requests and same-site requests.
- Strict - The browser will only send cookies for same-site requests (requests originating from the site that set the cookie). If the request originated from a different URL than the URL of the current location, none of the cookies tagged with the Strict attribute will be included.
- Lax - Same-site cookies are withheld on cross-site subrequests, such as calls to load images or frames, but will be sent when a user navigates to the URL from an external site; for example, by following a link.
Here’s an example of querying and adding cookies:
Cookie someCookie = request.getCookie("mycookie");
String cookieValue = someCookie.getValue();
// Do something with cookie...
// Add a cookie - this will get written back in the response automatically
request.response().addCookie(Cookie.cookie("othercookie", "somevalue"));
Vert.x can handle compressed body payloads which are encoded by the client with the deflate, gzip, snappy or brotli algorithms.
To enable decompression set setDecompressionEnabled on the
compression configuration when creating the server.
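For example, assuming the setter lives on the compression configuration as just described, enabling decompression could look like:

```java
// Sketch: accept compressed request bodies
HttpServerConfig config = new HttpServerConfig()
    .setCompression(new CompressionConfig()
        .setDecompressionEnabled(true));
HttpServer server = vertx.createHttpServer(config);
```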
Snappy is supported without external dependencies.
You need to have Brotli4j on the classpath to decompress Brotli, and Zstd-jni for Zstandard:
- Maven (in your pom.xml):
<dependency>
<groupId>com.aayushatharva.brotli4j</groupId>
<artifactId>brotli4j</artifactId>
<version>${brotli4j.version}</version>
</dependency>
<dependency>
<groupId>com.github.luben</groupId>
<artifactId>zstd-jni</artifactId>
<version>${zstd-jni.version}</version>
</dependency>
- Gradle (in your build.gradle file):
dependencies {
implementation 'com.aayushatharva.brotli4j:brotli4j:${brotli4j.version}'
runtimeOnly 'com.aayushatharva.brotli4j:native-$system-and-arch:${brotli4j.version}'
implementation 'com.github.luben:zstd-jni:${zstd-jni.version}'
}
When using Gradle, you need to add the runtime native library manually depending on your OS and architecture. See the Gradle section of Brotli4j for more details.
By default, decompression is disabled.
HTTP/2 and HTTP/3 are framed protocols with various frames for the HTTP request/response model. The protocols allow other kinds of frames to be sent or received.
To receive custom frames, you can use the customFrameHandler on the request; this will get called every time a custom frame arrives. Here’s an example:
request.customFrameHandler(frame -> {
System.out.println("Received a frame type=" + frame.type() +
" payload" + frame.payload().toString());
});
Custom frames are not subject to flow control - the frame handler will be called immediately when a custom frame is received, independently of the streaming state.
The server response object is an instance of HttpServerResponse and is obtained from the
request with response.
You use the response object to write a response back to the HTTP client.
The default HTTP status code for a response is 200, representing OK.
Use setStatusCode to set a different code.
You can also specify a custom status message with setStatusMessage.
If you don’t specify a status message, the default one corresponding to the status code will be used.
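For example, to reply with a custom status code and message:

```java
HttpServerResponse response = request.response();
// 503 would default to "Service Unavailable"; override the message
response.setStatusCode(503);
response.setStatusMessage("Down for maintenance");
response.end();
```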
Note: for HTTP/2 and HTTP/3 the status message won’t be present in the response since these protocols don’t transmit it to the client.
To write data to an HTTP response, you use one of the write operations.
These can be invoked multiple times before the response is ended. They can be invoked in a few ways:
With a single buffer:
HttpServerResponse response = request.response();
response.write(buffer);
With a string. In this case the string will be encoded using UTF-8 and the result written to the wire.
HttpServerResponse response = request.response();
response.write("hello world!");
With a string and an encoding. In this case the string will be encoded using the specified encoding and the result written to the wire.
HttpServerResponse response = request.response();
response.write("hello world!", "UTF-16");
Writing to a response is asynchronous and always returns immediately after the write has been queued.
If you are just writing a single string or buffer to the HTTP response you can write it and end the response in a single call to the end method.
The first call to write results in the response header being written to the response. Consequently, if you are
not using HTTP chunking then you must set the Content-Length header before writing to the response, since it will
be too late otherwise. If you are using HTTP chunking you do not have to worry.
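For example, a non-chunked response has to declare its length before the first write; a sketch:

```java
HttpServerResponse response = request.response();
String body = "hello world!";
// Not chunked, so Content-Length must be set before the first write
response.putHeader(HttpHeaders.CONTENT_LENGTH, String.valueOf(body.length()));
response.write(body);
response.end();
```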
Once you have finished with the HTTP response you should end it.
This can be done in several ways:
With no arguments, the response is simply ended.
HttpServerResponse response = request.response();
response.write("hello world!");
response.end();
It can also be called with a string or buffer in the same way write is called. In this case it’s just the same as calling write with a string or buffer followed by calling end with no arguments. For example:
HttpServerResponse response = request.response();
response.end("hello world!");
HTTP response headers can be added to the response by adding them directly to the headers:
HttpServerResponse response = request.response();
MultiMap headers = response.headers();
headers.set("content-type", "text/html");
headers.set("other-header", "wibble");
Or you can use putHeader:
HttpServerResponse response = request.response();
response.putHeader("content-type", "text/html").putHeader("other-header", "wibble");
Headers must all be added before any parts of the response body are written.
Vert.x supports HTTP Chunked Transfer Encoding.
This allows the HTTP response body to be written in chunks, and is normally used when a large response body is being streamed to a client and the total size is not known in advance.
You put the HTTP response into chunked mode as follows:
HttpServerResponse response = request.response();
response.setChunked(true);
The default is non-chunked. When in chunked mode, each call to one of the write methods will result in a new HTTP chunk being written out.
When in chunked mode you can also write HTTP response trailers to the response. These are actually written in the final chunk of the response.
Note: a chunked response has no effect for an HTTP/2 or HTTP/3 stream.
To add trailers to the response, add them directly to the trailers.
HttpServerResponse response = request.response();
response.setChunked(true);
MultiMap trailers = response.trailers();
trailers.set("X-wibble", "woobble").set("X-quux", "flooble");
Or use putTrailer:
HttpServerResponse response = request.response();
response.setChunked(true);
response
.putTrailer("X-wibble", "woobble")
.putTrailer("X-quux", "flooble");
If you were writing a web server, one way to serve a file from disk would be to open it as an AsyncFile and pipe it to the HTTP response.
Or you could load it in one go using readFile and write it straight to the response.
Alternatively, Vert.x provides a method which allows you to serve a file from disk or the filesystem to an HTTP response in one operation. Where supported by the underlying operating system this may result in the OS directly transferring bytes from the file to the socket without being copied through user-space at all.
This is done by using sendFile, and is usually more efficient for large
files, but may be slower for small files.
Here’s a very simple web server that serves files from the file system using sendFile:
vertx.createHttpServer().requestHandler(request -> {
String file = "";
if (request.path().equals("/")) {
file = "index.html";
} else if (!request.path().contains("..")) {
file = request.path();
}
request.response().sendFile("web/" + file);
}).listen(8080);
The HTTP response uses the file name extension to set the HTTP response content type header when the file name extension is well known by MimeMapping (lookup is case-insensitive).
Sending a file is asynchronous and may not complete until some time after the call has returned. If you want to be notified when the file has been fully written you can use the result of sendFile.
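Assuming sendFile returns a future, as listen does in the earlier examples, completion can be observed like this:

```java
request.response()
  .sendFile("web/somefile.html")
  .onComplete(ar -> {
    if (ar.succeeded()) {
      System.out.println("File was sent");
    } else {
      // e.g. the file was not found or the connection was closed
    }
  });
```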
Please see the chapter about serving files from the classpath for restrictions about the classpath resolution or disabling it.
Note: if you use sendFile while using HTTPS it will copy through user-space, since if the kernel is copying data directly from disk to socket it doesn’t give us an opportunity to apply any encryption.
Warning: if you’re going to write web servers directly using Vert.x be careful that users cannot exploit the path to access files outside the directory from which you want to serve them, or the classpath. It may be safer instead to use Vert.x Web.
When there is a need to serve just a segment of a file, say starting from a given byte, you can achieve this by doing:
vertx.createHttpServer().requestHandler(request -> {
long offset = 0;
try {
offset = Long.parseLong(request.getParam("start"));
} catch (NumberFormatException e) {
// error handling...
}
long end = Long.MAX_VALUE;
try {
end = Long.parseLong(request.getParam("end"));
} catch (NumberFormatException e) {
// error handling...
}
request.response().sendFile("web/mybigfile.txt", offset, end);
}).listen(8080);
You are not required to supply the length if you want to send a file starting from an offset until the end, in this case you can just do:
vertx.createHttpServer().requestHandler(request -> {
long offset = 0;
try {
offset = Long.parseLong(request.getParam("start"));
} catch (NumberFormatException e) {
// error handling...
}
request.response().sendFile("web/mybigfile.txt", offset);
}).listen(8080);
The server response is a WriteStream so you can pipe to it from any ReadStream, e.g. AsyncFile, NetSocket, WebSocket or HttpServerRequest.
Here’s an example which echoes the request body back in the response for any PUT methods. It uses a pipe for the body, so it will work even if the HTTP request body is much larger than can fit in memory at any one time:
vertx.createHttpServer().requestHandler(request -> {
HttpServerResponse response = request.response();
if (request.method() == HttpMethod.PUT) {
response.setChunked(true);
request.pipeTo(response);
} else {
response.setStatusCode(400).end();
}
}).listen(8080);
You can also use the send method to send a ReadStream.
Sending a stream is a pipe operation, however as this is a method of HttpServerResponse, it
will also take care of chunking the response when the content-length is not set.
vertx.createHttpServer().requestHandler(request -> {
HttpServerResponse response = request.response();
if (request.method() == HttpMethod.PUT) {
response.send(request);
} else {
response.setStatusCode(400).end();
}
}).listen(8080);
HTTP/2 and HTTP/3 are framed protocols with various frames for the HTTP request/response model. The protocols allow other kinds of frames to be sent or received.
To send such frames, you can use the writeCustomFrame on the response.
Here’s an example:
int frameType = 40;
int frameStatus = 10;
Buffer payload = Buffer.buffer("some data");
// Sending a frame to the client
response.writeCustomFrame(frameType, frameStatus, payload);
These frames are sent immediately and are not subject to flow control - when such a frame is sent, it may be sent before other data frames.
cancel is a best effort to cancel a stream by the underlying HTTP protocol.
- HTTP/1.x does not allow a clean cancellation of a request or a response stream; for example, when a client uploads a resource already present on the server, the server needs to accept the entire request: the implementation closes the connection while the current request is in flight.
- HTTP/2 supports stream reset at any time during the request/response: the implementation sends an HTTP/2 reset frame with the error 0x08
- HTTP/3 relies on QUIC capabilities: the implementation performs a QUIC reset or abort reading with the code 0x10c
request.response().cancel();
The request and response are notified of stream reset events via their exception handlers:
request.response().exceptionHandler(err -> {
if (err instanceof StreamResetException) {
StreamResetException reset = (StreamResetException) err;
System.out.println("Stream reset " + reset.getCode());
}
});
Note: stream reset should be avoided because the implementation only works partially for HTTP/3 and reset error codes depend on the version of the protocol.
You can set an exceptionHandler to receive any exceptions that happen before the connection is passed to the requestHandler or to the webSocketHandler, e.g. during the TLS handshake.
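For example (a sketch; the handler simply logs the failure):

```java
HttpServer server = vertx.createHttpServer();
server.exceptionHandler(err -> {
  // Called for failures that happen before any request handler is involved,
  // e.g. a failed TLS handshake
  System.out.println("Connection-level failure: " + err.getMessage());
});
```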
Vert.x will handle invalid HTTP requests and provides a default handler that handles common cases appropriately, e.g. it responds with REQUEST_HEADER_FIELDS_TOO_LARGE when a request header is too long.
You can set your own invalidRequestHandler to process invalid requests. Your implementation can handle specific cases and delegate other cases to HttpServerRequest.DEFAULT_INVALID_REQUEST_HANDLER.
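A sketch of such a handler; the delegation uses the DEFAULT_INVALID_REQUEST_HANDLER constant named above:

```java
server.invalidRequestHandler(request -> {
  // Handle the cases your application cares about here, then
  // delegate everything else to the default behaviour
  HttpServerRequest.DEFAULT_INVALID_REQUEST_HANDLER.handle(request);
});
```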
Vert.x comes with support for HTTP Compression out of the box.
This means you are able to automatically compress the body of the responses before they are sent back to the client.
If the client does not support HTTP compression the responses are sent back without compressing the body.
This allows you to handle clients that support HTTP compression and those that don’t at the same time.
To enable compression you can configure it with setCompressionEnabled.
By default, compression is not enabled.
When HTTP compression is enabled the server will check if the client includes an Accept-Encoding header which
includes the supported compressions. Commonly used are deflate and gzip. Both are supported by Vert.x.
If such a header is found the server will automatically compress the body of the response with one of the supported compressions and send it back to the client.
Whenever the response needs to be sent without compression you can set the header content-encoding to identity:
request.response()
.putHeader(HttpHeaders.CONTENT_ENCODING, HttpHeaders.IDENTITY)
.sendFile("/path/to/image.jpg");
Be aware that compression may be able to reduce network traffic but is more CPU-intensive.
To address this latter issue Vert.x allows you to tune the 'compression level' parameter that is native of the gzip/deflate compression algorithms and also set the minimum response content size threshold for compression.
The compression level allows you to configure the gzip/deflate algorithms in terms of the compression ratio of the resulting data and the computational cost of the compress/decompress operation.
The compression level is an integer value ranging from '1' to '9', where '1' means a lower compression ratio but the fastest algorithm and '9' means the maximum compression ratio available but a slower algorithm.
Using compression levels higher than 1-2 usually saves just a few bytes in size - the gain is not linear, and depends on the specific data to be compressed - but it carries a non-negligible cost in terms of CPU cycles required by the server while generating the compressed response data (note that at the moment Vert.x doesn’t support any form of caching of compressed response data, even for static files, so the compression is done on-the-fly for every request body generation). It affects clients in the same way while decoding (inflating) received responses, an operation that becomes more CPU-intensive as the level increases.
It may not make sense to compress responses under certain size thresholds where the trade-off between CPU and saved network bytes is not beneficial.
The minimum response content size threshold for compression can be configured via setContentSizeThreshold.
For example, if set to '100', responses under 100 bytes will not be compressed. By default, it is '0' which means all content can be compressed.
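For example, combining a compression level and a size threshold; the placement of setContentSizeThreshold on CompressionConfig is an assumption, consistent with the other compression settings shown in this section:

```java
// Sketch: gzip responses, but only bodies of 1 KiB or more
new HttpServerConfig()
    .setCompression(new CompressionConfig()
        .setCompressionEnabled(true)
        .setContentSizeThreshold(1024)
        .addGzip());
```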
Vert.x supports multiple compression algorithms:
- Gzip
- Deflate
- Snappy
- Brotli
- Zstandard
You can configure them easily:
new HttpServerConfig()
.setCompression(new CompressionConfig()
.setCompressionEnabled(true)
.addGzip()
.addDeflate()
);
Brotli and Zstandard libraries need to be added to your dependencies.
- Maven (in your pom.xml):
<dependency>
<groupId>com.aayushatharva.brotli4j</groupId>
<artifactId>brotli4j</artifactId>
<version>${brotli4j.version}</version>
</dependency>
<dependency>
<groupId>com.github.luben</groupId>
<artifactId>zstd-jni</artifactId>
<version>${zstd-jni.version}</version>
</dependency>
- Gradle (in your build.gradle file):
dependencies {
implementation 'com.aayushatharva.brotli4j:brotli4j:${brotli4j.version}'
runtimeOnly 'com.aayushatharva.brotli4j:native-$system-and-arch:${brotli4j.version}'
implementation 'com.github.luben:zstd-jni:${zstd-jni.version}'
}
When using Gradle, you need to add the runtime native library manually depending on your OS and architecture. See the Gradle section of Brotli4j for more details.
You can configure compressors according to your needs:
new HttpServerConfig()
.setCompression(new CompressionConfig()
.addGzip(6, 15, 8));
You create an HttpClient instance with the default configuration as follows:
HttpClientAgent client = vertx.createHttpClient();By default, the client supports HTTP/1, HTTP/2 in plain text.
If you want to configure the client, you create it as follows:
HttpClientConfig config = new HttpClientConfig()
.setVersions(HttpVersion.HTTP_1_1)
.setMaxRedirects(5)
.setHttp1Config(new Http1ClientConfig()
.setMaxInitialLineLength(1024));
HttpClientAgent client = vertx.createHttpClient(config);Configuration addresses the following parts:
-
HTTP versions
-
timeouts and limits
-
HTTP/1.x specific configuration
-
HTTP/2 specific configuration
-
HTTP/3 specific configuration
-
TLS
-
Transport: TCP and QUIC
-
Decompression
-
Observability
-
Network logging
Vert.x HTTP clients can be configured to use HTTPS in exactly the same way as TCP or QUIC clients.
ClientSSLOptions sslOptions = new ClientSSLOptions()
.setTrustOptions(
new JksOptions().
setPath("/path/to/your/truststore.jks").
setPassword("password-of-your-truststore")
);
HttpClientConfig config = new HttpClientConfig()
.setSsl(true);
HttpClientAgent client = vertx.createHttpClient(config);You can read more about SSL client configuration
Vert.x supports HTTP/1.1 and HTTP/1.0 over plaintext and TLS.
HttpClientConfig config = new HttpClientConfig()
.setVersions(HttpVersion.HTTP_1_1)
.setSsl(true)
.setHttp1Config(new Http1ClientConfig()
.setMaxInitialLineLength(1024));Http1ClientConfig captures the configuration of HTTP/1.x specific aspects.
Vert.x supports HTTP/2 over TLS h2 and over TCP h2c.
To perform h2 requests, TLS must be enabled:
HttpClientConfig config = new HttpClientConfig()
.setVersions(HttpVersion.HTTP_2)
.setSsl(true)
.setHttp2Config(new Http2ClientConfig()
.setKeepAliveTimeout(Duration.ofSeconds(10)));Http2ClientConfig captures the configuration of HTTP/2 specific aspects.
Vert.x supports HTTP/3 over QUIC (todo : make link to QUIC)
HttpClientConfig config = new HttpClientConfig()
.setVersions(HttpVersion.HTTP_3)
.setSsl(true)
.setHttp3Config(new Http3ClientConfig()
.setKeepAliveTimeout(Duration.ofSeconds(10)));Http3ClientConfig captures the configuration of HTTP/3 specific aspects.
A client can mix TCP and QUIC at the same time.
HttpClientConfig config = new HttpClientConfig()
.setVersions(HttpVersion.HTTP_1_1, HttpVersion.HTTP_2, HttpVersion.HTTP_3)
.setSsl(true)
.setFollowAlternativeServices(true);
ClientSSLOptions sslOptions = new ClientSSLOptions()
.setKeyCertOptions(new JksOptions().setPath("/path/to/my/keystore"));
HttpClientAgent client = vertx.createHttpClient(config, sslOptions);When running on JDK 16+, or using a native transport, a client can connect to Unix domain sockets:
HttpClient httpClient = vertx.createHttpClient();
// Only available when running on JDK16+, or using a native transport
SocketAddress addr = SocketAddress.domainSocketAddress("/var/tmp/myservice.sock");
// Send request to the server
httpClient.request(new RequestOptions()
.setServer(addr)
.setHost("localhost")
.setPort(8080)
.setURI("/"))
.compose(request -> request.send().compose(HttpClientResponse::body))
.onComplete(ar -> {
if (ar.succeeded()) {
// Process response
} else {
// Handle failure
}
});For performance purpose, the client uses connection pooling when interacting with HTTP servers.
The concurrency of an HTTP/1.1 server is defined by the client, i.e. the client decides how many connections to open to improve performance. However, the concurrency of an HTTP/2 or HTTP/3 server is defined by the maximum number of concurrent streams the server allows on a single connection.
By default, the pool creates up to 5 connections per HTTP/1.1 server and a single connection for other protocols, as recommended.
You can override the pool configuration like this:
PoolOptions options = new PoolOptions().setHttp1MaxSize(10);
HttpClientAgent client = vertx.createHttpClient(options);Normally, you should not need more than a single connection for HTTP/2 or HTTP/3.
You can configure various pool options as follows:
-
options#setHttp1MaxSizethe maximum number of connections opened per HTTP/1.x server (5 by default) -
options#setHttp2MaxSizethe maximum number of connections opened per HTTP/2 server (1 by default); you should not change this value since a single HTTP/2 connection is capable of delivering the same performance level as multiple HTTP/1.x connections -
options#setHttp3MaxSizethe maximum number of connections opened per HTTP/3 server (1 by default); you should not change this value since a single HTTP/3 connection is capable of delivering the same performance level as multiple HTTP/1.x connections -
options#setCleanerPeriodthe period in milliseconds at which the pool checks for expired connections (1 second by default) -
options#setEventLoopSizesets the number of event loops the pool uses-
the default value
0configures the pool to use the event loop of the caller -
a positive value configures the pool to load balance the creation of connections over a list of event loops determined by the value
-
-
options#setMaxWaitQueueSizethe maximum number of HTTP requests waiting for a connection to become available; when the queue is full, the request is rejected
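The options above can be combined; here is a sketch, with setter names taken from the list and exact signatures assumed:

```java
// Sketch of pool tuning; assumes a millisecond-based setCleanerPeriod
PoolOptions options = new PoolOptions()
    .setHttp1MaxSize(10)      // up to 10 connections per HTTP/1.x server
    .setHttp2MaxSize(1)       // keep a single connection per HTTP/2 server
    .setCleanerPeriod(2000)   // check for expired connections every 2 seconds
    .setMaxWaitQueueSize(50); // reject requests once 50 are already waiting
HttpClientAgent client = vertx.createHttpClient(options);
```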
You can pass configuration to createHttpClient methods to configure an HTTP client.
Alternatively you can build a client with the builder API :
HttpClientAgent client = vertx
.httpClientBuilder()
.with(config)
.build();In addition to HttpClientConfig, ClientSSLOptions and
PoolOptions, you can set
-
a connection event handler notified when the client connects to a server
-
a redirection handler to implement an alternative HTTP redirect behavior
-
SSLEngineOptionsto configure the SSL engine
The HTTP client is very flexible and there are various ways you can make requests with it.
The first step when making a request is obtaining an HTTP connection to the remote server:
client
.request(HttpMethod.GET, 8080, "myserver.mycompany.com", "/some-uri")
.onSuccess(ar1 -> {
// Connected to the server
});The client will connect to the remote server or reuse an available connection from the client connection pool.
Often you want to make many requests to the same host/port with an HTTP client. To avoid repeating the host/port every time you make a request, you can configure the client with a default host/port:
HttpClientConfig config = new HttpClientConfig()
.setDefaultHost("wibble.com");
// Can also set default port if you want...
HttpClientAgent client = vertx.createHttpClient(config);
client
.request(HttpMethod.GET, "/some-uri")
.compose(request -> request.send())
.onSuccess(response -> {
System.out.println("Received response with status code " + response.statusCode());
});You can write headers to a request using the HttpHeaders as follows:
HttpClientAgent client = vertx.createHttpClient();
// Write some headers using the headers multi-map
MultiMap headers = HttpHeaders.set("content-type", "application/json").set("other-header", "foo");
client
.request(HttpMethod.GET, "some-uri")
.compose(request -> {
request.headers().addAll(headers);
return request.send();
})
.onSuccess(response -> {
System.out.println("Received response with status code " + response.statusCode());
});The headers are an instance of MultiMap which provides operations for adding, setting and removing
entries. HTTP headers allow more than one value for a specific key.
You can also write headers using putHeader
request.putHeader("content-type", "application/json")
.putHeader("other-header", "foo");If you wish to write headers to the request you must do so before any part of the request body is written.
The HttpClientRequest request methods connect to the remote server
or reuse an existing connection. The request instance obtained is pre-populated with some data
such as the host or the request URI, but you need to send this request to the server.
You can call send to send a request such as an HTTP
GET and process the asynchronous HttpClientResponse.
client
.request(HttpMethod.GET, 8080, "myserver.mycompany.com", "/some-uri")
// Send the request
.compose(request -> request.send())
// And process the response
.onSuccess(response -> {
System.out.println("Received response with status code " + response.statusCode());
});You can also send the request with a body.
send with a string, the Content-Length
header will be set for you if it was not previously set.
client
.request(HttpMethod.GET, 8080, "myserver.mycompany.com", "/some-uri")
// Send the request
.compose(request -> request.send("Hello World"))
// And process the response
.onComplete(ar -> {
if (ar.succeeded()) {
HttpClientResponse response = ar.result();
System.out.println("Received response with status code " + response.statusCode());
} else {
System.out.println("Something went wrong " + ar.cause().getMessage());
}
});send with a buffer, the
Content-Length header will be set for you if it was not previously set.
request
.send(Buffer.buffer("Hello World"))
.onComplete(ar -> {
if (ar.succeeded()) {
HttpClientResponse response = ar.result();
System.out.println("Received response with status code " + response.statusCode());
} else {
System.out.println("Something went wrong " + ar.cause().getMessage());
}
});send with a stream, if
the Content-Length header was not previously set, the request is sent with a chunked transfer encoding.
request
.putHeader(HttpHeaders.CONTENT_LENGTH, "1000")
.send(stream)
.onComplete(ar -> {
if (ar.succeeded()) {
HttpClientResponse response = ar.result();
System.out.println("Received response with status code " + response.statusCode());
} else {
System.out.println("Something went wrong " + ar.cause().getMessage());
}
});The send methods send the whole request at once.
Sometimes you’ll want to have low level control on how you write requests bodies.
The HttpClientRequest can be used to write the request body.
Here are some examples of writing a POST request with a body:
HttpClientAgent client = vertx.createHttpClient();
client.request(HttpMethod.POST, "some-uri")
.onSuccess(request -> {
request
.response()
.onSuccess(response -> {
System.out.println("Received response with status code " + response.statusCode());
});
// Now do stuff with the request
request.putHeader("content-length", "1000");
request.putHeader("content-type", "text/plain");
request.write(body);
// Make sure the request is ended when you're done with it
request.end();
});Methods exist to write strings in UTF-8 encoding and in any specific encoding and to write buffers:
request.write("some data");
// Write string encoded in specific encoding
request.write("some other data", "UTF-16");
// Write a buffer
request.write(Buffer.buffer()
.appendInt(123)
.appendLong(245L)
);When you’re writing to a request, the first call to write will result in the request headers being written
out to the wire.
The actual write is asynchronous and might not occur until some time after the call has returned.
Non-chunked HTTP requests with a request body require a Content-Length header to be provided.
Consequently, if you are not using chunked HTTP then you must set the Content-Length header before writing
to the request, as it will be too late otherwise.
If you are calling one of the end methods that take a string or buffer then Vert.x will automatically calculate
and set the Content-Length header before writing the request body.
If you are using HTTP chunking a Content-Length header is not required, so you do not have to calculate the size
up-front.
Once you have finished with the HTTP request you must end it with one of the end
operations.
Ending a request causes any headers to be written, if they have not already been written, and the request to be marked as complete.
Requests can be ended in several ways. With no arguments the request is simply ended:
request.end();Or a string or buffer can be provided in the call to end. This is like calling write with the string or buffer
before calling end with no arguments:
request.end("some-data");
// End it with a buffer
Buffer buffer = Buffer.buffer().appendFloat(12.3f).appendInt(321);
request.end(buffer);An HttpClientRequest instance is also a WriteStream instance.
You can pipe to it from any ReadStream instance.
For example, you could pipe a file on disk to an HTTP request body as follows:
request.setChunked(true);
file.pipeTo(request);Vert.x supports HTTP Chunked Transfer Encoding for requests.
This allows the HTTP request body to be written in chunks, and is normally used when a large request body is being streamed to the server, whose size is not known in advance.
You put the HTTP request into chunked mode using setChunked.
In chunked mode each call to write will cause a new chunk to be written to the wire. In chunked mode there is
no need to set the Content-Length of the request up-front.
request.setChunked(true);
// Write some chunks
for (int i = 0; i < 10; i++) {
request.write("this-is-chunk-" + i);
}
request.end();You can send HTTP form submission bodies with the send
variant.
ClientForm form = ClientForm.form();
form.attribute("firstName", "Dale");
form.attribute("lastName", "Cooper");
// Submit the form as a form URL encoded body
request
.send(form)
.onSuccess(res -> {
// OK
});By default, the form is submitted with the application/x-www-form-urlencoded content type header. You can set
the content-type header to multipart/form-data instead
ClientForm form = ClientForm.form();
form.attribute("firstName", "Dale");
form.attribute("lastName", "Cooper");
// Submit the form as a multipart form body
request
.putHeader("content-type", "multipart/form-data")
.send(form)
.onSuccess(res -> {
// OK
});If you want to upload files and send attributes, you can create a ClientMultipartForm instead.
ClientMultipartForm form = ClientMultipartForm.multipartForm()
.attribute("imageDescription", "a very nice image")
.binaryFileUpload(
"imageFile",
"image.jpg",
"/path/to/image",
"image/jpeg");
// Submit the form as a multipart form body
request
.send(form)
.onSuccess(res -> {
// OK
});A client configured with SSL trust can perform HTTPS requests.
HttpClientAgent client = vertx.createHttpClient(new HttpClientConfig()
.setSsl(true), sslOptions);
// Use the global default configuration with TLS enabled
client
.request(new RequestOptions()
.setHost("localhost")
.setPort(8080)
.setURI("/"))
.compose(request -> request.send())
.onSuccess(response -> {
System.out.println("Received response with status code " + response.statusCode());
});The setSsl setting acts as the default client setting.
SSL can also be enabled/disabled per request with RequestOptions or when specifying a
scheme with setAbsoluteURI method.
HttpClientAgent client = vertx.createHttpClient(new HttpClientConfig(), sslOptions);
// Override the default configuration and use TLS
client
.request(new RequestOptions()
.setHost("localhost")
.setPort(8080)
.setURI("/")
.setSsl(true))
.compose(request -> request.send())
.onSuccess(response -> {
System.out.println("Received response with status code " + response.statusCode());
});The setSsl setting overrides the default client setting:
-
setting the value to
falsewill disable SSL/TLS even if the client is configured to use SSL/TLS -
setting the value to
truewill enable SSL/TLS even if the client is configured to not use SSL/TLS, the actual client SSL/TLS (such as trust, key/certificate, ciphers, ALPN, …) will be reused
Likewise, the scheme set with setAbsoluteURI
also overrides the default client setting.
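For instance, a request to an https absolute URI enables TLS for that request; the host below is hypothetical:

```java
// The https scheme enables TLS for this request regardless of the client default
client
  .request(new RequestOptions()
    .setAbsoluteURI("https://example.com/some-uri"))
  .compose(request -> request.send())
  .onSuccess(response -> {
    System.out.println("Received response with status code " + response.statusCode());
  });
```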
By default, a request uses one of the HTTP versions configured by HttpClientConfig.
The configured versions form an ordered list; the client uses the first version of the list.
You can also set a version on the request when the request requires a specific version:
client
.request(new RequestOptions()
.setProtocolVersion(HttpVersion.HTTP_2)
.setHost("localhost")
.setPort(8080)
.setURI("/"))
.compose(request -> request.send())
.onSuccess(response -> {
System.out.println("Received response with status code " + response.statusCode());
});This can be used with a hybrid HTTP client to request an HTTP/3 server: a hybrid client can only determine the IP address for HTTP over TCP using DNS. Setting the request version instructs the client of the correct HTTP version to use:
client
.request(new RequestOptions()
.setProtocolVersion(HttpVersion.HTTP_3)
.setAbsoluteURI("https://google.com"))
.compose(request -> request.send())
.onSuccess(response -> {
System.out.println("Received response with HTTP version " + response.request().version());
});This gap is addressed by:
-
HTTP Alternative Services, fully supported
-
Service Binding and Parameter Specification via the DNS, it should be implemented in the near future
Calling setFollowAlternativeServices configures the client to handle alt-svc notifications
and use server advertised protocols.
HttpClientConfig config = new HttpClientConfig()
.setFollowAlternativeServices(true)
.setVersions(HttpVersion.HTTP_1_1, HttpVersion.HTTP_3)
.setSsl(true);
HttpClientAgent client = vertx.createHttpClient(config, sslOptions);For instance, https://google.com responds to HTTP/1.1 requests with an alternative service for h3 at the same address
and port (UDP), subsequent calls to this server can use HTTP/3 instead of HTTP/1.1.
client
.request(new RequestOptions().setAbsoluteURI("https://google.com"))
.compose(request -> request.send())
.onSuccess(response -> {
System.out.println("Received response with HTTP version " + response.request().version());
});The client processes alt-svc notifications in the background and tries to connect to the advertised servers before considering them as valid.
You can set an idle timeout to protect your application from unresponsive servers using setIdleTimeout or idleTimeout. When the request does not receive any data within the timeout, the request is cancelled.
Future<Buffer> fut = client
.request(new RequestOptions()
.setHost(host)
.setPort(port)
.setURI(uri)
.setIdleTimeout(timeoutMS))
.compose(request -> request
.send()
.compose(HttpClientResponse::body));|
Note
|
the timeout starts when the HttpClientRequest is available, implying a connection was
obtained from the pool.
|
You can set a connect timeout to protect your application from a busy client connection pool. The
Future<HttpClientRequest> is failed when a connection is not obtained before the timeout delay.
The connect timeout option is not related to the TCP connect timeout: when a request is made against a pooled HTTP client, the timeout applies to the time needed to obtain a connection from the pool to serve the request. The timeout might fire because the server does not respond in time or because the pool is too busy to serve the request.
You can configure both timeouts using setTimeout:
Future<Buffer> fut = client
.request(new RequestOptions()
.setHost(host)
.setPort(port)
.setURI(uri)
.setTimeout(timeoutMS))
.compose(request -> request
.send()
.compose(HttpClientResponse::body));HTTP/2 and HTTP/3 are framed protocols with various frames for the HTTP request/response model. The protocols also allow other kinds of frames to be sent or received.
To send such frames, you can use the writeCustomFrame on the response.
Here’s an example:
int frameType = 40;
int frameStatus = 10;
Buffer payload = Buffer.buffer("some data");
// Sending a frame to the server
request.writeCustomFrame(frameType, frameStatus, payload);These frames are sent immediately and are not subject to flow control - when such a frame is sent, it may be written before other pending data frames.
cancel is a best effort to cancel a stream by the underlying HTTP protocol.
-
HTTP/1.x does not allow a clean cancellation of a request or a response stream; for example, when a client uploads a resource already present on the server, the server still needs to receive the entire request: the implementation closes the connection while the current request is in flight.
-
HTTP/2 supports stream reset at any time during the request/response: the implementation sends an HTTP/2 reset frame with the error
0x08 -
HTTP/3 relies on QUIC capabilities: the implementation performs a QUIC reset or abort reading with the code
0x10c
request.cancel();The request and response handlers are notified of stream cancellation events:
request.exceptionHandler(err -> {
if (err instanceof StreamResetException) {
StreamResetException reset = (StreamResetException) err;
System.out.println("Stream reset " + reset.getCode());
}
});|
Note
|
stream reset should be avoided because the implementation works only partially for HTTP/3 and reset error codes depend on the version of the protocol. |
You receive an instance of HttpClientResponse in the handler that you specify in one of
the request methods or by setting a handler directly on the HttpClientRequest object.
You can query the status code and the status message of the response with statusCode
and statusMessage.
request
.send()
.onSuccess(response -> {
// the status code - e.g. 200 or 404
System.out.println("Status code is " + response.statusCode());
// the status message e.g. "OK" or "Not Found".
System.out.println("Status message is " + response.statusMessage());
});The HttpClientResponse instance is also a ReadStream which means
you can pipe it to any WriteStream instance.
HTTP responses can contain headers. Use headers to get the headers.
The object returned is a MultiMap as HTTP headers can contain multiple values for single keys.
String contentType = response.headers().get("content-type");
String contentLength = response.headers().get("content-length");Chunked HTTP responses can also contain trailers - these are sent in the last chunk of the response body.
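Trailers can be read once the response has fully ended, for example from the endHandler; the trailer name below is hypothetical:

```java
response.endHandler(v -> {
  // Trailers are only available after the last chunk has been received
  // "X-Checksum" is a hypothetical trailer name used for illustration
  String checksum = response.getTrailer("X-Checksum");
  System.out.println("Checksum trailer: " + checksum);
});
```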
The response handler is called when the headers of the response have been read from the wire.
If the response has a body this might arrive in several pieces some time after the headers have been read. We don’t wait for all the body to arrive before calling the response handler as the response could be very large and we might be waiting a long time, or run out of memory for large responses.
As parts of the response body arrive, the handler is called with
a Buffer representing the piece of the body:
client
.request(HttpMethod.GET, "some-uri")
.compose(request -> request.send())
.onSuccess(response -> {
response.handler(buffer -> {
System.out.println("Received a part of the response body: " + buffer);
});
});If you know the response body is not very large and want to aggregate it all in memory before handling it, you can either aggregate it yourself:
request
.send()
.onSuccess(response -> {
// Create an empty buffer
Buffer totalBuffer = Buffer.buffer();
response.handler(buffer -> {
System.out.println("Received a part of the response body: " + buffer.length());
totalBuffer.appendBuffer(buffer);
});
response.endHandler(v -> {
// Now all the body has been read
System.out.println("Total response body length is " + totalBuffer.length());
});
});Or you can use the convenience body which
is called with the entire body when the response has been fully read:
request
.send()
.compose(response -> response.body())
.onSuccess(body -> {
// Now all the body has been read
System.out.println("Total response body length is " + body.length());
});The response endHandler is called when the entire response body has been read,
or, if there is no body, immediately after the headers have been read and the response handler has been called.
The client interface is very simple and follows this pattern:
-
requesta connection -
sendorwrite/endthe request to the server -
handle the beginning of the
HttpClientResponse -
process the response events
You can use Vert.x future composition methods to make your code simpler; however, the API is event driven, and you need to understand it, otherwise you might experience data races (i.e. losing events, leading to corrupted data).
|
Note
|
Vert.x Web Client is a higher level API alternative (in fact it is built on top of this client) you might consider if this client is too low level for your use cases |
The client API intentionally does not return a Future<HttpClientResponse> because setting a completion
handler on the future can be racy when this is set outside the event-loop.
Future<HttpClientResponse> get = client.get("some-uri");
// Assuming we have a client that returns a future response
// assuming this is *not* on the event-loop
// introduce a potential data race for the sake of this example
Thread.sleep(100);
get.onSuccess(response -> {
// Response events might have happened already
response
.body()
.onComplete(ar -> {
});
});Confining the HttpClientRequest usage within a verticle is the easiest solution as the Verticle
will ensure that events are processed sequentially avoiding races.
vertx.deployVerticle(() -> new AbstractVerticle() {
@Override
public void start() {
HttpClient client = vertx.createHttpClient();
Future<HttpClientRequest> future = client.request(HttpMethod.GET, "some-uri");
}
}, new DeploymentOptions());When you are interacting with the client, possibly outside a verticle, you can safely perform composition as long as you do not delay the response events, e.g. by processing the response directly on the event-loop.
Future<JsonObject> future = client
.request(HttpMethod.GET, "some-uri")
.compose(request -> request
.send()
.compose(response -> {
// Process the response on the event-loop which guarantees no races
if (response.statusCode() == 200 &&
response.getHeader(HttpHeaders.CONTENT_TYPE).equals("application/json")) {
return response
.body()
.map(buffer -> buffer.toJsonObject());
} else {
return Future.failedFuture("Incorrect HTTP response");
}
}));
// Listen to the composed final json result
future.onComplete(ar -> {
if (ar.succeeded()) {
System.out.println("Received json result " + ar.result());
} else {
System.out.println("Something went wrong " + ar.cause().getMessage());
}
});You can also guard the response body with HTTP responses expectations.
Future<JsonObject> future = client
.request(HttpMethod.GET, "some-uri")
.compose(request -> request
.send()
.expecting(HttpResponseExpectation.SC_OK.and(HttpResponseExpectation.JSON))
.compose(response -> response
.body()
.map(buffer -> buffer.toJsonObject())));
// Listen to the composed final json result
future.onComplete(ar -> {
if (ar.succeeded()) {
System.out.println("Received json result " + ar.result());
} else {
System.out.println("Something went wrong " + ar.cause().getMessage());
}
If you need to delay the response processing, you must pause the response or use a pipe; this
might be necessary when another asynchronous operation is involved.
Future<Void> future = client
.request(HttpMethod.GET, "some-uri")
.compose(request -> request
.send()
.compose(response -> {
// Process the response on the event-loop which guarantees no races
if (response.statusCode() == 200) {
// Create a pipe, this pauses the response
Pipe<Buffer> pipe = response.pipe();
// Write the file on the disk
return fileSystem
.open("/some/large/file", new OpenOptions().setWrite(true))
.onFailure(err -> pipe.close())
.compose(file -> pipe.to(file));
} else {
return Future.failedFuture("Incorrect HTTP response");
}
}));As seen above, you must perform sanity checks manually after the response is received.
You can trade flexibility for clarity and conciseness using response expectations.
Response expectations can guard the control flow when the response does
not match a criterion.
The HTTP Client comes with a set of out of the box predicates ready to use:
Future<Buffer> fut = client
.request(options)
.compose(request -> request
.send()
.expecting(HttpResponseExpectation.SC_SUCCESS)
.compose(response -> response.body()));You can also create custom predicates when existing predicates don’t fit your needs:
HttpResponseExpectation methodsExpectation =
resp -> {
String methods = resp.getHeader("Access-Control-Allow-Methods");
return methods != null && methods.contains("POST");
};
// Send pre-flight CORS request
client
.request(new RequestOptions()
.setMethod(HttpMethod.OPTIONS)
.setPort(8080)
.setHost("myserver.mycompany.com")
.setURI("/some-uri")
.putHeader("Origin", "Server-b.com")
.putHeader("Access-Control-Request-Method", "POST"))
.compose(request -> request
.send()
.expecting(methodsExpectation))
.onComplete(res -> {
if (res.succeeded()) {
// Process the POST request now
} else {
System.out.println("Something went wrong " + res.cause().getMessage());
}
});As a convenience, the HTTP Client ships a few predicates for common use cases.
For status codes, e.g. HttpResponseExpectation.SC_SUCCESS to verify that the
response has a 2xx code, you can also create a custom one:
client
.request(options)
.compose(request -> request
.send()
.expecting(HttpResponseExpectation.status(200, 202)))
.onSuccess(res -> {
// ....
});For content types, e.g. HttpResponseExpectation.JSON to verify that the
response body contains JSON data, you can also create a custom one:
client
.request(options)
.compose(request -> request
.send()
.expecting(HttpResponseExpectation.contentType("some/content-type")))
.onSuccess(res -> {
// ....
});Please refer to the HttpResponseExpectation documentation for a full list of predefined expectations.
By default, expectations (including the predefined ones) convey a simple error message. You can customize the exception class by changing the error converter:
Expectation<HttpResponseHead> expectation = HttpResponseExpectation.SC_SUCCESS
.wrappingFailure((resp, err) -> new MyCustomException(resp.statusCode(), err.getMessage()));|
Warning
|
creating an exception in Java can have a performance cost when it captures a stack trace, so you might want to create exceptions that do not capture the stack trace. By default, exceptions are reported using an exception that does not capture the stack trace. |
You can retrieve the list of cookies from a response using cookies.
Alternatively you can just parse the Set-Cookie headers yourself in the response.
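A sketch of reading the cookies, assuming each entry is a raw Set-Cookie header value:

```java
request
  .send()
  .onSuccess(response -> {
    // Each element is the raw value of a Set-Cookie response header
    for (String cookie : response.cookies()) {
      System.out.println("Received cookie " + cookie);
    }
  });
```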
The client can be configured to follow HTTP redirections provided by the Location response header when the client receives:
-
a
301,302,307or308status code along with an HTTP GET or HEAD method -
a
303status code; in addition, the redirected request performs an HTTP GET method
Here’s an example:
client
.request(HttpMethod.GET, "some-uri")
.compose(request -> request
.setFollowRedirects(true)
.send())
.onSuccess(response -> {
System.out.println("Received response with status code " + response.statusCode());
});The maximum redirects is 16 by default and can be changed with setMaxRedirects.
HttpClientAgent client = vertx.createHttpClient(
new HttpClientConfig()
.setMaxRedirects(32));
client
.request(HttpMethod.GET, "some-uri")
.compose(request -> request.setFollowRedirects(true).send())
.onSuccess(response -> {
System.out.println("Received response with status code " + response.statusCode());
One size does not fit all and the default redirection policy may not suit your needs.
The default redirection policy can be changed with a custom implementation:
HttpClientAgent client = vertx.httpClientBuilder()
.withRedirectHandler(response -> {
// Only follow 301 code
if (response.statusCode() == 301 && response.getHeader("Location") != null) {
// Compute the redirect URI
String absoluteURI = resolveURI(response.request().absoluteURI(), response.getHeader("Location"));
// Create a new ready to use request that the client will use
return Future.succeededFuture(new RequestOptions().setAbsoluteURI(absoluteURI));
}
// We don't redirect
return null;
})
.build();The policy handles the original HttpClientResponse received and returns either null
or a Future<HttpClientRequest>.
-
when
nullis returned, the original response is processed -
when a future is returned, the request will be sent on its successful completion
-
when a future is returned, the exception handler set on the request is called on its failure
The returned request must not have been sent yet, so that the original request handlers can be transferred and the client can send it afterwards.
Most of the original request settings will be propagated to the new request:
-
request headers, unless you have set headers on the new request
-
request body unless the returned request uses a
GETmethod -
response handler
-
request exception handler
-
request timeout
HTTP tunnels can be created with connect:
client.request(HttpMethod.CONNECT, "some-uri")
// Connect to the server
.compose(request -> request
.connect()
.expecting(HttpResponseExpectation.SC_OK))
.onSuccess(response -> {
// Tunnel created, raw buffers are transmitted on the wire
NetSocket socket = response.netSocket();
});The handler will be called after the HTTP response header is received; the socket will then be ready for tunneling and can send and receive buffers.
connect works like send, but it reconfigures the transport to exchange
raw buffers.
HTTP/2 and HTTP/3 are framed protocols with various frames for the HTTP request/response model. These protocols allow other kinds of frames to be sent or received.
To receive custom frames, you can use the customFrameHandler on the response, this will get called every time a custom frame arrives. Here’s an example:
response.customFrameHandler(frame -> {
System.out.println("Received a frame type=" + frame.type() +
" payload" + frame.payload().toString());
});The http client comes with support for HTTP decompression out of the box.
This means the client can let the remote http server know that it supports compression, and will be able to handle compressed response bodies.
An http server is free to either compress the body with one of the supported compression algorithms or to send it back without compressing it at all. So this is only a hint for the http server, which it may ignore at will.
To tell the http server which compression algorithms the client supports, it includes an Accept-Encoding header
with the supported compression algorithms as value. Multiple compression algorithms are supported. In the case of Vert.x this
will result in the following header being added:
Accept-Encoding: gzip, deflateThe server will then choose one of these. You can detect if a server compressed the body by checking for the
Content-Encoding header in the response sent back from it.
If the body of the response was compressed via gzip it will include for example the following header:
Content-Encoding: gzipTo enable decompression set setDecompressionEnabled on the configuration
used when creating the client.
By default decompression is disabled.
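As an illustration of what the client does transparently when decompression is enabled, here is a minimal JDK-only sketch (no Vert.x involved) of compressing and decompressing a gzip body with java.util.zip:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipBody {
  // Compress a body the way a server answering Content-Encoding: gzip would.
  public static byte[] gzip(byte[] plain) {
    try {
      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      try (GZIPOutputStream out = new GZIPOutputStream(bos)) {
        out.write(plain);
      }
      return bos.toByteArray();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  // Decompress, as the client does before handing the body to your handler.
  public static byte[] gunzip(byte[] compressed) {
    try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
      return in.readAllBytes();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```

With decompression enabled the handler always observes the plain body, regardless of the Content-Encoding the server chose.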
By default, when the client resolves a hostname to a list of several IP addresses, the client uses the first returned IP address.
The http client can be configured to perform client-side load balancing instead:
HttpClientAgent client = vertx
.httpClientBuilder()
.withLoadBalancer(LoadBalancer.ROUND_ROBIN)
.build();Vert.x provides several load balancing policies out of the box that you can use.
Most load balancing policies are pretty much self-explanatory.
Hash based routing can be achieved with the LoadBalancer.CONSISTENT_HASHING policy.
HttpClientAgent client = vertx
.httpClientBuilder()
.withLoadBalancer(LoadBalancer.CONSISTENT_HASHING)
.build();
HttpServer server = vertx.createHttpServer()
.requestHandler(inboundReq -> {
// Get a routing key, in this example we will hash the incoming request host/ip
// it could be anything else, e.g. user id, request id, ...
String routingKey = inboundReq.remoteAddress().hostAddress();
client.request(new RequestOptions()
.setHost("example.com")
.setURI("/test")
.setRoutingKey(routingKey))
.compose(outboundReq -> outboundReq.send()
.expecting(HttpResponseExpectation.SC_OK)
.compose(HttpClientResponse::body))
.onComplete(ar -> {
if (ar.succeeded()) {
Buffer response = ar.result();
}
});
});
server.listen(servicePort);The default consistent hashing policy uses 4 virtual nodes per server and uses a random policy in the absence of a routing key.
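To illustrate the idea behind consistent hashing with virtual nodes, here is a minimal, self-contained sketch of a hash ring. It is not the Vert.x implementation; the class name and the CRC32 hash function are illustrative choices:

```java
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.zip.CRC32;

public class HashRing {
  private final TreeMap<Long, String> ring = new TreeMap<>();

  // Place each server on the ring several times (virtual nodes),
  // which smooths the key distribution across servers.
  public HashRing(List<String> servers, int virtualNodes) {
    for (String server : servers) {
      for (int i = 0; i < virtualNodes; i++) {
        ring.put(hash(server + "#" + i), server);
      }
    }
  }

  // Map a routing key to the first ring entry at or after its hash,
  // wrapping around to the first entry of the ring.
  public String select(String routingKey) {
    SortedMap<Long, String> tail = ring.tailMap(hash(routingKey));
    return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
  }

  private static long hash(String s) {
    CRC32 crc = new CRC32();
    crc.update(s.getBytes());
    return crc.getValue();
  }
}
```

The same routing key always selects the same server, and removing a server only remaps the keys that were assigned to it.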
You can create a policy configuration that best fits your needs:
LoadBalancer loadBalancer = LoadBalancer.consistentHashing(10, LoadBalancer.POWER_OF_TWO_CHOICES);Custom load balancing policies can also be used.
LoadBalancer loadBalancer = endpoints -> {
// Returns an endpoint selector for the given endpoints
// a selector is a stateful view of the provided immutable list of endpoints
return () -> indexOfEndpoint(endpoints);
};
HttpClientAgent client = vertx
.httpClientBuilder()
.withLoadBalancer(loadBalancer)
.build();Http keep alive allows http connections to be used for more than one request. This can be a more efficient use of connections when you’re making multiple requests to the same server.
For HTTP/1.x versions, the http client supports pooling of connections, allowing you to reuse connections between requests.
For pooling to work, keep alive must be set to true using setKeepAlive
on the HTTP/1.1 configuration used when configuring the client. The default value is true.
When keep alive is enabled, Vert.x will add a Connection: Keep-Alive header to each HTTP/1.0 request sent.
When keep alive is disabled, Vert.x will add a Connection: Close header to each HTTP/1.1 request sent to signal
that the connection will be closed after completion of the response.
The maximum number of connections to pool for each server is configured using setHttp1MaxSize.
When making a request with pooling enabled, Vert.x will create a new connection if there are less than the maximum number of connections already created for that server, otherwise it will add the request to a queue.
Keep alive connections will be closed by the client automatically after a timeout. The timeout can be specified
by the server using the keep-alive header:
keep-alive: timeout=30You can set the default timeout using setKeepAliveTimeout - any
connections not used within this timeout will be closed. Please note the timeout value is in seconds not milliseconds.
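As an illustration of how a timeout can be extracted from such a header, here is a hypothetical parser sketch (not Vert.x code; the class name and fallback parameter are illustrative):

```java
public class KeepAliveHeader {
  // Extract the timeout (in seconds) from a keep-alive header value such as
  // "timeout=30" or "timeout=30, max=100"; returns the fallback when absent or malformed.
  public static int timeoutSeconds(String headerValue, int fallback) {
    if (headerValue == null) {
      return fallback;
    }
    for (String part : headerValue.split(",")) {
      String[] kv = part.trim().split("=", 2);
      if (kv.length == 2 && kv[0].trim().equalsIgnoreCase("timeout")) {
        try {
          return Integer.parseInt(kv[1].trim());
        } catch (NumberFormatException e) {
          return fallback;
        }
      }
    }
    return fallback;
  }
}
```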
The client also supports pipe-lining of requests on a connection.
Pipe-lining means another request is sent on the same connection before the response from the preceding one has returned. Pipe-lining is not appropriate for all requests.
To enable pipe-lining, it must be enabled using setPipelining.
By default, pipe-lining is disabled.
When pipe-lining is enabled requests will be written to connections without waiting for previous responses to return.
The number of pipe-lined requests over a single connection is limited by setPipeliningLimit.
This option defines the maximum number of http requests sent to the server awaiting a response. This limit ensures the
fairness of the distribution of the client requests over the connections to the same server.
Multiplexed HTTP protocols (HTTP/2 and HTTP/3) advocate using a single connection to a server; by default the http client uses a single connection for each server, and all the streams to the same server are multiplexed over the same connection.
When it is desirable to limit the number of multiplexed streams per connection and use a connection
pool instead of a single connection, setMultiplexingLimit can be used on the HTTP/2 or the HTTP/3 configuration.
Http2ClientConfig http2Config = new Http2ClientConfig()
.setMultiplexingLimit(10);
HttpClient client = vertx.createHttpClient(
new HttpClientConfig()
.setHttp2Config(http2Config),
new PoolOptions()
.setHttp2MaxSize(3)
);The multiplexing limit for a connection is a setting set on the client that limits the number of streams
of a single connection. The effective value can be even lower if the server sets a lower limit
with the SETTINGS_MAX_CONCURRENT_STREAMS setting.
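A sketch of the resulting arithmetic: with the configuration above (pool size 3, multiplexing limit 10), the number of streams the client can have in flight to one server is bounded as follows (illustrative helper, not a Vert.x API):

```java
public class Http2Capacity {
  // With a pool of multiplexed connections, the client can have at most
  // poolSize * min(clientMultiplexingLimit, serverMaxConcurrentStreams)
  // streams in flight to one server: the server's
  // SETTINGS_MAX_CONCURRENT_STREAMS caps the client-side limit.
  public static int maxConcurrentStreams(int poolSize,
                                         int multiplexingLimit,
                                         int serverMaxConcurrentStreams) {
    return poolSize * Math.min(multiplexingLimit, serverMaxConcurrentStreams);
  }
}
```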
HTTP/2 or HTTP/3 connections will not be closed by the client automatically. To close them you can call close
or close the client instance.
Alternatively you can set idle timeout using setIdleTimeout - any
connections not used within this timeout will be closed. Please note the idle timeout value is in seconds not milliseconds.
Most HTTP interactions are performed using HttpClientAgent request/response API: the client obtains
a connection from its pool of connections to perform a request.
Alternatively, you can connect directly to a server (bypassing the connection pool) and get an HTTP client connection.
HttpConnectOptions connectOptions = new HttpConnectOptions()
.setHost("example.com")
.setPort(80);
Future<HttpClientConnection> fut = client.connect(connectOptions);The HttpClientConnection can create an HttpClientRequest:
connection
.request()
.onSuccess(request -> {
request.setMethod(HttpMethod.GET);
request.setURI("/some-uri");
Future<HttpClientResponse> response = request.send();
});|
Tip
|
HttpClientConnection extends HttpClient
|
A client connection can handle a certain amount of concurrent requests. When the maximum number of concurrent requests is reached, any subsequent request is queued until a slot is available.
According to the HTTP 1.1 specification a client can set a
header Expect: 100-Continue and send the request header before sending the rest of the request body.
The server can then respond with an interim response status Status: 100 (Continue) to signify to the client that
it is ok to send the rest of the body.
The idea here is it allows the server to authorise and accept/reject the request before large amounts of data are sent. Sending large amounts of data if the request might not be accepted is a waste of bandwidth and ties up the server in reading data that it will just discard.
Vert.x allows you to set a continueHandler on the
client request object.
This will be called if the server sends back a Status: 100 (Continue) response to signify that it is ok to send
the rest of the request.
This is used in conjunction with writeHead to write the head of the request.
Here’s an example:
client.request(HttpMethod.PUT, "some-uri")
.onSuccess(request -> {
request.response().onSuccess(response -> {
System.out.println("Received response with status code " + response.statusCode());
});
request.putHeader("Expect", "100-Continue");
request.continueHandler(v -> {
// OK to send rest of body
request.write("Some data");
request.write("Some more data");
request.end();
});
request.writeHead();
});On the server side a Vert.x http server can be configured to automatically send back 100 Continue interim responses
when it receives an Expect: 100-Continue header.
This is done by setting the option setHandle100ContinueAutomatically.
If you’d prefer to decide whether to send back continue responses manually, then this property should be set to
false (the default), then you can inspect the headers and call writeContinue
to have the client continue sending the body:
httpServer.requestHandler(request -> {
if ("100-Continue".equalsIgnoreCase(request.getHeader("Expect"))) {
// Send a 100 continue response
request.response().writeContinue();
// The client should send the body when it receives the 100 response
request.bodyHandler(body -> {
// Do something with body
});
request.endHandler(v -> {
request.response().end();
});
}
});You can also reject the request by sending back a failure status code directly: in this case the body should either be ignored or the connection should be closed (100-Continue is a performance hint and cannot be a logical protocol constraint):
httpServer.requestHandler(request -> {
if ("100-Continue".equalsIgnoreCase(request.getHeader("Expect"))) {
//
boolean reject = true;
if (reject) {
// Reject with a failure code and close the connection
// this is probably best with persistent connection
request.response()
.setStatusCode(405)
.putHeader("Connection", "close")
.end();
} else {
// Reject with a failure code and ignore the body
// this may be appropriate if the body is small
request.response()
.setStatusCode(405)
.end();
}
}
});The HttpConnection offers an API to deal with HTTP connection events, lifecycle
and settings.
HTTP/1.x partially implements the HttpConnection API.
HTTP/2 fully implements the HttpConnection API.
HTTP/3 partially implements the HttpConnection API.
The Javadoc indicates the level of support for every supported protocol.
The connection method returns the request connection on the server:
HttpConnection connection = request.connection();A connection handler can be set on the server to be notified of any incoming connection:
HttpServer server = vertx.httpServerBuilder()
.with(httpConfig)
.withConnectHandler(connection -> {
System.out.println("A client connected");
})
.build();The connection method returns the request connection on the client:
HttpConnection connection = request.connection();A connection handler can be set on a client builder to be notified when a connection has been established:
vertx
.httpClientBuilder()
.with(config)
.withConnectHandler(connection -> {
System.out.println("Connected to the server");
})
.build();Multiplexed HTTP connections are configured by the HttpSettings object.
Each endpoint must respect the settings sent by the other side of the connection.
When a connection is established, the client and the server exchange initial settings. Initial settings are configured by:
-
Http2ClientConfig#setInitialSettings on the client and Http2ServerConfig#setInitialSettings on the server.
-
Http3ClientConfig#setInitialSettings on the client and Http3ServerConfig#setInitialSettings on the server.
HttpSettings settings = connection.remoteSettings();
// HTTP/2
Integer http2MaxFrameSize = settings.get(Http2Settings.MAX_FRAME_SIZE);
// HTTP/3
Long http3MaxFieldSectionSize = settings.get(Http3Settings.MAX_FIELD_SECTION_SIZE);HTTP server and client support graceful shutdown.
Calling shutdown initiates the shut-down phase whereby the server or client is given the opportunity to perform clean-up actions.
-
A standalone HTTP server unbinds
-
A shared HTTP server is removed from the set of accepting servers
-
An HTTP client refuses to send any new requests
When all in-flight requests on all connections have been processed, the server or client is then closed.
In addition, HTTP/2 and HTTP/3 connections send a GOAWAY frame to signal the remote endpoint that the connection
cannot be used anymore.
server
.shutdown()
.onSuccess(res -> {
System.out.println("Server is now closed");
});Shutdown waits until all sockets are closed or the shutdown timeout fires. When the timeout fires, all sockets are forcibly closed.
Each open HTTP connection is notified with a shutdown event, allowing it to perform cleanup before the actual connection is closed.
server.connectionHandler(conn -> {
conn.shutdownHandler(v -> {
// Perform clean-up
});
});The default shut-down timeout is 30 seconds; you can override the timeout:
server
.shutdown(60, TimeUnit.SECONDS)
.onSuccess(res -> {
System.out.println("Server is now closed");
});Connection close closes the connection:
-
it closes the socket for HTTP/1.x
-
it performs a shutdown with no delay for HTTP/2 and HTTP/3; the GOAWAY frame will still be sent before the connection is closed
|
Note
|
a close is equivalent to a connection shutdown without a grace period |
The closeHandler notifies when a connection is closed.
You can share an HTTP client between multiple verticles or instances of the same verticle. Such a client should be created outside of a verticle, otherwise it will be closed when the verticle that created it is undeployed.
HttpClientConfig config = new HttpClientConfig()
.setShared(true);
HttpClientAgent client = vertx.createHttpClient(config);
vertx.deployVerticle(() -> new AbstractVerticle() {
@Override
public void start() throws Exception {
// Use the client
}
}, new DeploymentOptions().setInstances(4));You can also create a shared HTTP client in each verticle:
vertx.deployVerticle(() -> new AbstractVerticle() {
HttpClientAgent client;
@Override
public void start() {
// Get or create a shared client
// this actually creates a lease to the client
// when the verticle is undeployed, the lease will be released automatically
client = vertx.createHttpClient(new HttpClientConfig()
.setShared(true)
.setName("my-client"));
}
}, new DeploymentOptions().setInstances(4));The first call creates and returns a new shared client. Subsequent calls reuse this client and create a lease to it. The client is closed after all leases have been disposed.
By default, a client reuses the current event-loop when it needs to create a TCP connection. The HTTP client will therefore randomly use event-loops of verticles using it in a safe fashion.
You can set the number of event loops a client will use, independently of the number of verticle instances using it:
vertx.deployVerticle(() -> new AbstractVerticle() {
HttpClientAgent client;
@Override
public void start() {
// The client creates and uses two event-loops for 4 instances
client = vertx.createHttpClient(
new HttpClientConfig()
.setShared(true)
.setName("my-client"),
new PoolOptions().setEventLoopSize(2));
}
}, new DeploymentOptions().setInstances(4));When several HTTP servers listen on the same port, Vert.x orchestrates the request handling using a round-robin strategy.
Let’s take a verticle creating an HTTP server such as:
vertx.createHttpServer().requestHandler(request -> {
request.response().end("Hello from server " + this);
}).listen(8080);This service is listening on port 8080.
So, when this verticle is instantiated multiple times, as with deploymentOptions.setInstances(2), what happens?
If both verticles tried to bind to the same port, you would get a socket exception.
Fortunately, Vert.x handles this case for you.
When you deploy another server on the same host and port as an existing server it doesn’t actually try and create a new server listening on the same host/port. It binds only once to the socket. When receiving a request it calls the server handlers following a round-robin strategy.
Let’s now imagine a client calling the server multiple times.
Vert.x delegates the requests to one of the servers sequentially:
Hello from i.v.e.h.s.HttpServerVerticle@1
Hello from i.v.e.h.s.HttpServerVerticle@2
Hello from i.v.e.h.s.HttpServerVerticle@1
Hello from i.v.e.h.s.HttpServerVerticle@2
...
Consequently, the servers can scale over available cores while each Vert.x verticle instance remains strictly single threaded, and you don’t have to do any special tricks like writing load-balancers in order to scale your server on your multi-core machine.
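The round-robin dispatch described above can be sketched in plain Java as follows (an illustration of the strategy, not the Vert.x internals; the class name is hypothetical):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

public class RoundRobinDispatcher<R> {
  private final List<Consumer<R>> handlers;
  private final AtomicInteger next = new AtomicInteger();

  public RoundRobinDispatcher(List<Consumer<R>> handlers) {
    this.handlers = handlers;
  }

  // Deliver each request to the registered handlers in turn, wrapping around.
  public void dispatch(R request) {
    int idx = Math.floorMod(next.getAndIncrement(), handlers.size());
    handlers.get(idx).accept(request);
  }
}
```

Only one socket is bound; the dispatcher just rotates over the handlers the way Vert.x rotates over the server instances.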
You can bind on a shared random port using a negative port value: the first bind will pick a port randomly, and subsequent binds on the same port value will share this random port.
vertx.createHttpServer().requestHandler(request -> {
request.response().end("Hello from server " + this);
}).listen(-1);If you’re creating http servers and clients from inside verticles, those servers and clients will be automatically closed when the verticle is undeployed.
For debugging purposes, network activity can be logged.
On the server:
HttpServerConfig config = new HttpServerConfig()
.setLogConfig(new LogConfig()
.setEnabled(true));
HttpServer server = vertx.createHttpServer(config);On the client
HttpClientConfig config = new HttpClientConfig()
.setLogConfig(new LogConfig()
.setEnabled(true));
HttpClientAgent client = vertx.createHttpClient(config);See the chapter on logging network activity for a detailed explanation.
Vert.x http servers can be configured to use SNI in exactly the same way as net servers.
Vert.x http client will present the actual hostname as server name during the TLS handshake.
HA PROXY protocol provides a convenient way to safely transport connection information such as a client’s address across multiple layers of NAT or TCP proxies.
HA PROXY protocol can be enabled by setting the option setUseProxyProtocol
and adding the following dependency in your classpath:
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-codec-haproxy</artifactId>
<!--<version>Should align with netty version that Vert.x uses</version>-->
</dependency>HttpServerConfig config = new HttpServerConfig();
config
.getTcpConfig()
.setUseProxyProtocol(true);
HttpServer server = vertx.createHttpServer(config);
server.requestHandler(request -> {
// Print the actual client address provided by the HA proxy protocol instead of the proxy address
System.out.println(request.remoteAddress());
// Print the address of the proxy
System.out.println(request.localAddress());
});HTTP/2 connection ping is useful for determining the connection round-trip time or checking the connection
validity: ping sends a PING frame to the remote
endpoint:
Buffer data = Buffer.buffer();
for (byte i = 0;i < 8;i++) {
data.appendByte(i);
}
connection
.ping(data)
.onSuccess(pong -> System.out.println("Remote side replied"));Vert.x will automatically send an acknowledgement when a PING frame is received;
a handler can be set to be notified of each ping received:
connection.pingHandler(ping -> {
System.out.println("Got pinged by remote side");
});The handler is just notified; the acknowledgement is sent regardless. This feature is aimed at implementing protocols on top of HTTP/2.
|
Note
|
this only applies to the HTTP/2 protocol |
An HTTP/2 server is protected against RST flood DDOS attacks (CVE-2023-44487): there is an upper bound to the number of RST
frames a server can receive in a time window. The default configuration sets the upper bound to 200 for a duration of
30 seconds.
You can use setRstFloodMaxRstFramePerWindow and setRstFloodWindowDuration to override these settings.
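The protection amounts to counting RST frames over a sliding window. An illustrative sketch of such a counter (not the Vert.x internals; class name and parameters are hypothetical):

```java
public class RstFloodGuard {
  private final int maxPerWindow;
  private final long windowMillis;
  private long windowStart;
  private int count;

  public RstFloodGuard(int maxPerWindow, long windowMillis) {
    this.maxPerWindow = maxPerWindow;
    this.windowMillis = windowMillis;
  }

  // Returns false when the RST frame budget for the current window is
  // exhausted, at which point a server would close the connection.
  public synchronized boolean onRstFrame(long nowMillis) {
    if (nowMillis - windowStart >= windowMillis) {
      // Start a new window and reset the budget
      windowStart = nowMillis;
      count = 0;
    }
    return ++count <= maxPerWindow;
  }
}
```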
HTTP/2 settings can be changed at any time after the connection is established:
connection.updateSettings(new Http2Settings().setMaxConcurrentStreams(100));As the remote side should acknowledge reception of the settings update, it’s possible to give a callback to be notified of the acknowledgment:
connection
.updateSettings(new Http2Settings().setMaxConcurrentStreams(100))
.onSuccess(v -> System.out.println("The settings update has been acknowledged "));Conversely, the remoteSettingsHandler is notified
when the new remote settings are received:
connection.remoteSettingsHandler(settings -> {
System.out.println("Received new settings");
});|
Note
|
this only applies to the HTTP/2 protocol, HTTP/3 settings are set initially and never changed |
WebSockets are a web technology that allows a full duplex socket-like connection between HTTP servers and HTTP clients (typically browsers).
Vert.x supports WebSockets on both the client and server-side.
There are two ways of handling WebSockets on the server side.
The first way involves providing a webSocketHandler
on the server instance.
When a WebSocket connection is made to the server, the handler will be called, passing in an instance of
ServerWebSocket.
server.webSocketHandler(webSocket -> {
System.out.println("Connected!");
});By default, the server accepts any inbound WebSocket.
You can set a WebSocket handshake handler to control the outcome of a WebSocket handshake, i.e. accept or reject an incoming WebSocket.
server.webSocketHandshakeHandler(handshake -> {
authenticate(handshake.headers(), ar -> {
if (ar.succeeded()) {
if (ar.result()) {
// Terminate the handshake with the status code 101 (Switching Protocol)
handshake.accept();
} else {
// Reject the handshake with 401 (Unauthorized)
handshake.reject(401);
}
} else {
// Will send a 500 error
handshake.reject(500);
}
});
});|
Note
|
the WebSocket will be automatically accepted after the handler is called unless the handshake has already been accepted or rejected |
The second way of handling WebSockets is to handle the HTTP Upgrade request that was sent from the client, and
call toWebSocket on the server request.
server.requestHandler(request -> {
switch (request.path()) {
case "/myapi":
Future<ServerWebSocket> fut = request.toWebSocket();
fut.onSuccess(ws -> {
// Do something
});
break;
default:
// Reject
request.response().setStatusCode(400).end();
break;
}
});The ServerWebSocket instance enables you to retrieve the headers,
path, query and
URI of the HTTP request of the WebSocket handshake.
The Vert.x WebSocketClient supports WebSockets.
You can connect a WebSocket to a server using one of the connect operations.
The returned future will be completed with an instance of WebSocket when the connection has been made:
WebSocketClient client = vertx.createWebSocketClient();
client
.connect(80, "example.com", "/some-uri")
.onSuccess(ws -> {
ws.textMessageHandler(msg -> {
// Handle msg
});
System.out.println("Connected!");
});When connecting from a non-Vert.x thread, you can create a ClientWebSocket, configure its handlers and
then connect to the server:
WebSocketClient client = vertx.createWebSocketClient();
client
.webSocket()
.textMessageHandler(msg -> {
// Handle msg
})
.connect(80, "example.com", "/some-uri")
.onSuccess(ws -> {
System.out.println("Connected!");
});
By default, the client sets the origin header to the server host, e.g. http://www.example.com. Some servers will refuse
such requests; you can configure the client to not set this header:
WebSocketConnectOptions options = new WebSocketConnectOptions()
.setHost(host)
.setPort(port)
.setURI(requestUri)
.setAllowOriginHeader(false);
client
.connect(options)
.onSuccess(ws -> {
System.out.println("Connected!");
});You can also set a different header:
WebSocketConnectOptions options = new WebSocketConnectOptions()
.setHost(host)
.setPort(port)
.setURI(requestUri)
.addHeader(HttpHeaders.ORIGIN, origin);
client
.connect(options)
.onSuccess(ws -> {
System.out.println("Connected!");
});|
Note
|
older versions of the WebSocket protocol use sec-websocket-origin instead
|
If you wish to write a single WebSocket message to the WebSocket you can do this with
writeBinaryMessage or
writeTextMessage :
Buffer buffer = Buffer.buffer().appendInt(123).appendFloat(1.23f);
webSocket.writeBinaryMessage(buffer);
// Write a simple text message
String message = "hello";
webSocket.writeTextMessage(message);If the WebSocket message is larger than the maximum WebSocket frame size as configured with
setMaxFrameSize
then Vert.x will split it into multiple WebSocket frames before sending it on the wire.
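The splitting behaviour can be illustrated with a small sketch (not the Vert.x implementation): the payload is cut into chunks of at most maxFrameSize bytes, where the first chunk corresponds to a binary or text frame, the rest to continuation frames, and only the last one is flagged final:

```java
import java.util.ArrayList;
import java.util.List;

public class FrameSplitter {
  // Split a message payload into chunks of at most maxFrameSize bytes.
  public static List<byte[]> split(byte[] message, int maxFrameSize) {
    List<byte[]> frames = new ArrayList<>();
    for (int offset = 0; offset < message.length; offset += maxFrameSize) {
      int end = Math.min(message.length, offset + maxFrameSize);
      byte[] chunk = new byte[end - offset];
      System.arraycopy(message, offset, chunk, 0, chunk.length);
      frames.add(chunk);
    }
    if (frames.isEmpty()) {
      // an empty message is still sent as one (final) frame
      frames.add(new byte[0]);
    }
    return frames;
  }
}
```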
A WebSocket message can be composed of multiple frames. In this case the first frame is either a binary or text frame followed by zero or more continuation frames.
The last frame in the message is marked as final.
To send a message consisting of multiple frames you create frames using
WebSocketFrame.binaryFrame
, WebSocketFrame.textFrame or
WebSocketFrame.continuationFrame and write them
to the WebSocket using writeFrame.
Here’s an example for binary frames:
WebSocketFrame frame1 = WebSocketFrame.binaryFrame(buffer1, false);
webSocket.writeFrame(frame1);
WebSocketFrame frame2 = WebSocketFrame.continuationFrame(buffer2, false);
webSocket.writeFrame(frame2);
// Write the final frame
WebSocketFrame frame3 = WebSocketFrame.continuationFrame(buffer2, true);
webSocket.writeFrame(frame3);In many cases you just want to send a WebSocket message that consists of a single final frame, so we provide a couple
of shortcut methods to do that with writeFinalBinaryFrame
and writeFinalTextFrame.
Here’s an example:
webSocket.writeFinalTextFrame("Geronimo!");
// Send a WebSocket message consisting of a single final binary frame:
Buffer buff = Buffer.buffer().appendInt(12).appendString("foo");
webSocket.writeFinalBinaryFrame(buff);To read frames from a WebSocket you use the frameHandler.
The frame handler will be called with instances of WebSocketFrame when a frame arrives,
for example:
webSocket.frameHandler(frame -> {
System.out.println("Received a frame of size!");
});Use close to close the WebSocket connection when you have finished with it.
The WebSocket instance is also a ReadStream and a
WriteStream so it can be used with pipes.
When using a WebSocket as a write stream or a read stream, it can only be used with WebSocket connections that use binary frames that are not split over multiple frames.
The HttpClient supports accessing HTTP/HTTPS URLs via an HTTP proxy (e.g. Squid), a SOCKS4a, or a SOCKS5 proxy.
The CONNECT protocol uses HTTP/1.x but can connect to HTTP/1.x and HTTP/2 servers.
Connecting to h2c (unencrypted HTTP/2 servers) is likely not supported by http proxies since they will support HTTP/1.1 only.
The proxy can be configured in the HttpClientConfig by setting a ProxyOptions object containing proxy type, hostname, port and optionally username and password.
Here’s an example of using an HTTP proxy:
HttpClientConfig config = new HttpClientConfig();
config.getTcpConfig()
.setProxyOptions(new ProxyOptions()
.setType(ProxyType.HTTP)
.setHost("localhost")
.setPort(3128)
.setUsername("username")
.setPassword("secret"));
HttpClientAgent client = vertx.createHttpClient(config);When the client connects to an HTTP URL, it connects to the proxy server and provides the full URL in the HTTP request, like GET http://www.somehost.com/path/file.html HTTP/1.1.
When the client connects to an HTTPS URL, it asks the proxy to create a tunnel to the remote host with the CONNECT method.
For a SOCKS5 proxy:
HttpClientConfig config = new HttpClientConfig();
config.getTcpConfig()
.setProxyOptions(new ProxyOptions()
.setType(ProxyType.SOCKS5)
.setHost("localhost")
.setPort(1080)
.setUsername("username")
.setPassword("secret"));
HttpClientAgent client = vertx.createHttpClient(config);The DNS resolution is always done on the proxy server. To achieve the functionality of a SOCKS4 client, it is necessary to resolve the DNS address locally.
ProxyOptions can also be set per request:
client.request(new RequestOptions()
.setHost("example.com")
.setProxyOptions(proxyOptions))
.compose(request -> request
.send()
.compose(HttpClientResponse::body))
.onSuccess(body -> {
System.out.println("Received response");
});|
Note
|
Client connection pooling is aware of proxies (including authentication). Consequently, two requests to the same host through different proxies do not share the same pooled connection. |
You can use setNonProxyHosts to configure a list of hosts that bypass the proxy.
The list accepts the * wildcard for matching domains:
HttpClientConfig config = new HttpClientConfig();
config.getTcpConfig()
.setProxyOptions(new ProxyOptions()
.setType(ProxyType.SOCKS5)
.setHost("localhost").setPort(1080)
.setUsername("username")
.setPassword("secret"))
.addNonProxyHost("*.foo.com")
.addNonProxyHost("localhost");
HttpClientAgent client = vertx.createHttpClient(config);By default, a 10-second connection timeout is set for the proxy handler in the Vert.x HTTP client. If the target server takes longer than that to accept the connection, or if the proxy is too busy and delays completion of the handshake with the client, you might increase this timeout:
proxyOptions.setConnectTimeout(Duration.ofSeconds(60));The HTTP proxy implementation supports getting ftp:// URLs if the proxy supports that.
When the HTTP request URI contains the full URL, the client will not compute a full HTTP URL and will instead use the full URL specified in the request URI:
HttpClientConfig config = new HttpClientConfig();
config.getTcpConfig()
.setProxyOptions(new ProxyOptions()
.setType(ProxyType.HTTP));
HttpClientAgent client = vertx.createHttpClient(config);
client
.request(HttpMethod.GET, "ftp://ftp.gnu.org/gnu/")
.compose(request -> request.send())
.onSuccess(response -> {
System.out.println("Received response with status code " + response.statusCode());
});