
Tomato QoS

Discussion in 'Tomato Firmware' started by rhine2, Jan 25, 2009.

  1. rhine2

    rhine2 Addicted to LI Member

    I have a few requests and queries:
    1. How come the bandwidth percentages of the different classes add up to more than 100%? Is this a bug? Can someone explain how this works in terms of the percentage window, e.g. 80%-100%?

    2. Are Layer 7 and IPP2P mutually exclusive, i.e. when one works, the other doesn't? Again, please explain how these work with respect to QoS.

    3. If I have the Broadcom GPL source ONLY, what minimal changes do I need to port QoS to that code? Has anyone done this? I would appreciate it if someone could help here. I just need to port the QoS feature and nothing else.

    regards
     
  2. Planiwa

    Planiwa LI Guru Member

    Please correct the following:

    When a server becomes available, higher-ranking clients may preempt lower-ranking ones, and lower-ranking clients may be served by otherwise idle higher-queue servers, so long as:

    Code:
    1. The guaranteed minimal proportion of servers available to a class, even if higher-class clients are waiting for service, is MIN(class).
    
    2. The allowable maximum proportion of clients of each class served simultaneously, even if idle servers for other class queues are waiting, is MAX(class).
    Thus, a pair of proportions characterizes the entitlement of each class: MIN%-MAX%.

    The first number is the minimum entitlement, and this can easily be understood in terms of dividing and reserving the total lanes, queues, servers, etc. among the classes. (Then obviously the parts cannot exceed the whole.) This allocation is fairer to the lower classes than a simple priority-preemption rule.

    The second number is a limit on reverse-preemption, i.e. the degree to which lower-class clients may make use of otherwise idle higher-class servers. Thus, if there are 10 express queues for shoppers with fewer than 6 items, all idle, and 10 shoppers appear, each with 100 items, the express queues cannot simply accommodate the bulk shoppers: during the long service-time of the bulk shoppers, express shoppers could arrive and not be served promptly. Thus, the second number is a consequence of service-time.
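    To make the MIN%-MAX% idea concrete, here is a rough Python sketch (not Tomato's actual code; the class table and link speed are invented, and I'm assuming the usual HTB-style mapping of MIN% to a guaranteed rate and MAX% to a borrowing ceiling). It also answers the "more than 100%" puzzle above: the guaranteed minimums must not exceed the whole, but the maximums only cap borrowing of otherwise idle bandwidth, so they can overlap freely.

    Code:
    # Sketch: per-class MIN%-MAX% entitlements as HTB-style rate/ceil.
    # Hypothetical class table and uplink speed, for illustration only.
    UPLINK_KBIT = 512  # assumed outbound bandwidth
    
    classes = {
        # class: (min_pct, max_pct)
        "Highest": (20, 100),
        "High":    (15, 100),
        "Medium":  (10,  80),
        "Low":     ( 5,  50),
    }
    
    assert sum(mn for mn, _ in classes.values()) <= 100, \
        "guaranteed minimums cannot exceed the whole link"
    # No such constraint on the maximums: each MAX% only limits how much
    # idle bandwidth a class may borrow, so the MAX column can add up to
    # well over 100%.
    
    for name, (mn, mx) in classes.items():
        rate = UPLINK_KBIT * mn // 100  # guaranteed share
        ceil = UPLINK_KBIT * mx // 100  # borrowing limit
        print(f"{name:8s} rate={rate:4d}kbit  ceil={ceil:4d}kbit")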


    The queue-and-server model above is concise and simple, but not quite correct, since there really is only one lane. Or, strictly speaking, one fast "down" lane and one very slow "up" lane.

    So, the whole process is linearized, packet by packet. Consider that the smallest packet is about 50 bytes, and the largest about 1500. You can see how, while the large packet is moving, 30 of those small packets are blocked, no matter how high their priority. That's the effect of service-time.
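    The service-time effect is easy to quantify. A minimal sketch (the uplink speed is an assumed example; substitute your own):

    Code:
    # Sketch: serialization delay -- how long one packet occupies the
    # single "lane". The uplink speed is an assumed example value.
    UPLINK_BPS = 512_000  # 512 kbit/s uplink
    
    def serialization_ms(size_bytes, link_bps=UPLINK_BPS):
        """Time the link is busy sending one packet, in milliseconds."""
        return size_bytes * 8 / link_bps * 1000
    
    big = serialization_ms(1500)  # ~23.4 ms at 512 kbit/s
    small = serialization_ms(50)  # ~0.78 ms
    print(f"a 1500-byte packet blocks the lane for {big:.1f} ms")
    print(f"that is {big / small:.0f} small (50-byte) packets' worth of time")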


    This linearization is not a big conceptual problem. When looking at the tasty QoS pie charts, there are three conceptual problems:

    1. In the bottom chart, the 100% is *not* 100% of bandwidth unless the channel is saturated. This could easily be fixed by including a white slice in the pie that represents unused bandwidth (see the sketch after this list).

    The problems with the top pie are more complex.

    2. Similarly to #1, "100% of connections" has little meaning when there are only 5 connections. This problem could be addressed by letting the user specify a "normal" (absolute or relative) number of connections in the conntrack table, and using a white slice for the unused portion, up to "normal". That would be a start.

    3. But there remains a very serious problem: those who look at the pies and think they understand them ("the top shows the distribution of demand, and the bottom shows the distribution of servicing") will still be very much misinformed, even with the "white slice" normalizations. The reason is that the vast majority of the connections in the conntrack table are likely to be dead. They are not demanding service at all; they are merely timing out. This is a serious conceptual problem. One way to solve it would be to offer the option of counting only *active* connections, skipping connections that have been idle for [1] [2] [3] seconds.
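    Both "white slice" normalizations, and the active-only count, are simple to compute. A rough sketch (all figures are invented; note that the conntrack table does not directly expose per-connection idle time, so the idle_s field below is hypothetical and would need instrumentation):

    Code:
    # Sketch: "white slice" pie normalization and active-only counting.
    # All input data is invented for illustration.
    UPLINK_KBIT = 512  # assumed outbound bandwidth
    
    # Problem 1: normalize the bandwidth pie against the full channel,
    # so an unsaturated link shows a white "unused" slice. (Problem 2
    # is the same idea, normalizing against a "normal" connection count.)
    class_kbit = {"Highest": 60, "Medium": 150, "Low": 40}
    used = sum(class_kbit.values())
    slices = {k: v / UPLINK_KBIT for k, v in class_kbit.items()}
    slices["(unused)"] = max(0.0, 1 - used / UPLINK_KBIT)
    for name, frac in slices.items():
        print(f"{name:9s} {frac:6.1%}")
    
    # Problem 3: count only *active* connections, skipping those idle
    # longer than a cutoff. idle_s is a hypothetical per-connection field.
    connections = [
        {"class": "Highest", "idle_s": 0.2},
        {"class": "Low",     "idle_s": 45.0},  # dead, just timing out
        {"class": "Medium",  "idle_s": 1.1},
    ]
    IDLE_CUTOFF_S = 2.0
    active = [c for c in connections if c["idle_s"] < IDLE_CUTOFF_S]
    print(f"active: {len(active)} of {len(connections)} connections")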
     
