
Priority & Dependency:

Here I made a simple test, but the results do not look right. I sent 100 requests in a for loop over the same connection (the request URL is the same for all of them; I wonder whether that influences the results).

If the loop index is i, then my request's stream_id is i, while the stream it depends on has stream_id 100+i. If our assumption is right, the request should never get a response, because no stream with an id from 101 to 200 ever exists.

But the results show no difference between setting the dependency and not: I received the response DATA frames one by one, with no timeout or waiting. I also ran some related tests where the stream that depends on another stream is sent first and the stream it depends on is sent later, but the result is the same. I am still trying to understand these results. Can anyone help? Many thanks.

Code here:

public void run() throws Exception
{
    host = "google.com";
    port = 443;
    //client init
    HTTP2Client client = new HTTP2Client();
    SslContextFactory sslContextFactory = new SslContextFactory(true);
    client.addBean(sslContextFactory);
    client.start();       

    //connect init
    FuturePromise<Session> sessionPromise = new FuturePromise<>();
    client.connect(sslContextFactory, new InetSocketAddress(host, port), new ServerSessionListener.Adapter(), sessionPromise);
    Session session = sessionPromise.get(10, TimeUnit.SECONDS);

    //headers init
    HttpFields requestFields = new HttpFields();
    requestFields.put("User-Agent", client.getClass().getName() + "/" + Jetty.VERSION);

    final Phaser phaser = new Phaser(2);

    //multiple request in one connection
    for (int i = 0; i < 100; i++)
    {
        MetaData.Request metaData = new MetaData.Request("GET", new HttpURI("https://" + host + ":" + port + "/"), HttpVersion.HTTP_2, requestFields);
        PriorityFrame testPriorityFrame = new PriorityFrame(i, 100+i, 4, true);
        HeadersFrame headersFrame = new HeadersFrame(0, metaData, testPriorityFrame, true);

        //listen header/data/push frame
        session.newStream(headersFrame, new Promise.Adapter<Stream>(), new Stream.Listener.Adapter()
        {
            @Override
            public void onHeaders(Stream stream, HeadersFrame frame)
            {
                System.err.println(frame + " headId:" + frame.getStreamId());
                if (frame.isEndStream())
                    phaser.arrive();
            }

            @Override
            public void onData(Stream stream, DataFrame frame, Callback callback)
            {
                System.err.println(frame + " streamid:" + frame.getStreamId());
                callback.succeeded();
                if (frame.isEndStream())
                    phaser.arrive();
            }

            @Override
            public Stream.Listener onPush(Stream stream, PushPromiseFrame frame)
            {
                System.err.println(frame + " pushid:" + frame.getStreamId());
                phaser.register();
                return this;
            }
        });
    }
    phaser.awaitAdvanceInterruptibly(phaser.arrive(), 5, TimeUnit.SECONDS);

    client.stop();
}

1 Answer

The Jetty project did not implement (yet) HTTP/2 request prioritization.

We are discussing whether this is of any use for a server, whose concern is to write back the responses as quickly as it can.

Having one client change its mind about the priority of its requests, or make a request knowing that it really wanted another request served first, is a lot of work for the server, which in the meantime has to serve the other 10,000 clients connected to it.

By the time the server has recomputed the priority tree for the dependent requests, it could probably have served those requests already.

By the time the client realizes that it has to change the priority of a request, the whole response for it could already be in flight.
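To make the "priority tree" concrete: RFC 7540 (Section 5.3) treats priority and dependency information as purely advisory scheduling hints, and a dependency on a stream the server does not currently have in its tree simply falls back to a default priority under the root (stream 0); it never withholds a response. Below is a minimal sketch of that fallback rule, assuming a server that tracks such a tree. The class and method names (`PriorityTree`, `prioritize`, `parentOf`) are illustrative, not Jetty APIs.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of RFC 7540 section 5.3 stream dependencies. Names are
// illustrative, not Jetty APIs. Key point: a dependency on a stream the
// server does not know about (e.g. an idle id like 100+i in the question)
// falls back to the root (stream 0), so the response is never blocked.
public class PriorityTree
{
    static final int ROOT = 0;

    static class Node
    {
        final int streamId;
        final int parentId;
        final int weight; // 1..256 per RFC 7540

        Node(int streamId, int parentId, int weight)
        {
            this.streamId = streamId;
            this.parentId = parentId;
            this.weight = weight;
        }
    }

    private final Map<Integer, Node> nodes = new HashMap<>();

    // Apply a dependency: if the parent is unknown, RFC 7540 5.3.1 says
    // the stream gets default priority, i.e. it depends on the root.
    public void prioritize(int streamId, int parentId, int weight)
    {
        int effectiveParent = nodes.containsKey(parentId) ? parentId : ROOT;
        nodes.put(streamId, new Node(streamId, effectiveParent, weight));
    }

    public int parentOf(int streamId)
    {
        Node node = nodes.get(streamId);
        return node == null ? ROOT : node.parentId;
    }

    public static void main(String[] args)
    {
        PriorityTree tree = new PriorityTree();
        // Stream 1 declares a dependency on stream 101, which does not exist:
        tree.prioritize(1, 101, 16);
        // The dependency silently falls back to the root, so stream 1 keeps
        // default priority and its response is not withheld.
        System.out.println(tree.parentOf(1)); // prints 0
    }
}
```

This is why, even on a server that does implement prioritization, a dependency on a never-created stream cannot make a request "never get a response": the tree degenerates to the default shape and scheduling proceeds as if no priority had been set.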

Having said that, we are certainly interested in real-world use cases where request prioritization performed by the server yields a real performance improvement. We just have not seen one yet.

I would love to hear why you are interested in request prioritization and how you are leveraging it. Your answer could drive the Jetty project to implement HTTP/2 priorities.


6 Comments

Because my research interest is network security, I am wondering whether attackers may use the priority features of HTTP/2 to attack a server, so I want to use the Jetty client to run such tests. By the way, I saw the PriorityFrame in the javadoc (9.3.3); why do you say you still haven't implemented it? Another thing: you said that by the time the client realizes it has to change the priority of a request, the whole response could already be in flight. Can I use Jetty to build multiple specific requests with relations between them, just like in my tests?
The PriorityFrame is there because the server must parse it, but it is immediately discarded and no priority logic is performed (yet). You can use the Jetty HTTP2Client to build requests with priorities and dependencies, but you need to find a server that implements them to trigger the behavior; I suggest nghttpx. Feel free to join the Jetty mailing list for further discussion.
If I understood right, you mean I can use the HTTP2Client to build requests with priorities and dependencies, and it seems I need to use the PriorityFrame to build the dependencies. But you also said there is no priority logic, so even if I build the requests with priorities and dependencies, they still have no effect. Am I right? I also tried the code against an h2o server, which supports priority and dependency, but the results remain the same.
Yes, you can use HTTP2Client to build requests with priorities and dependencies. When I said "there is no priority logic" I was talking about the server. The client does not have any logic apart from sending the frames. What result remains the same when you talk to H2O?
I see. I mean that when I use the code above to talk to h2o, there is no difference between setting the dependency and not: I got the DATA frames one by one without timeout or waiting. I guess your explanation ("By the time the server ... could probably have served those requests already. By the time the client ... could already be in flight.") is right. If I use several threads to send the requests at the same time, do you think the results may be different? I am not clear how priority is implemented in both the client and the server. Do you have any recommended material on this? Many thanks.
