
Signal strength, transmitter power, overclocking, and more ...

Discussion in 'Tomato Firmware' started by CandyBoy, Jul 21, 2009.

  1. CandyBoy

    CandyBoy LI Guru Member

    I want to ask you one question. If I set Country / Region to Europe, what is the max transmit power I can set?
     
  2. Toastman

    Toastman Super Moderator Staff Member Member

    C/B, you can set any power up to 251 but it does not actually change if it is limited internally by the driver, according to country.

    The limit on EU is known to be low(ish) so I normally use the Singapore setting, which seems to allow maximum power and channels 1-13, so giving the widest choice.

    I just tried a very quick experiment on Victek's 1.25 (ND driver), max power was set to 251, country set to Europe. I reduced power by approximately half each time until the signal began to drop at the receiver. That was at about 20mW. No change above 42mW.

    So the answer to your question seems to be *around* 42mW for Europe (GUI setting) - tested on a WRT54GL only.
     
  3. premudriy

    premudriy LI Guru Member

    Toastman, do you know if there's a limit if setting country to US as well?
     
  4. baldrickturnip

    baldrickturnip LI Guru Member

  5. Toastman

    Toastman Super Moderator Staff Member Member

    Baldrickturnip - Tested just by setting it up, test shows you can vary from lowest up to 251 with a corresponding increase in received signal strength at a client, on WRT54GL and using Victek's tomato with ND driver. Quite probably Singapore may not actually allow this :wink: . But remember that Tomato was designed to deliberately allow the power to be increased. In the original driver it was possible on all channels by the use of a parameter which overrode the wireless driver's own defaults. But this override no longer works with the "New Drivers" and we are left with a somewhat half-baked solution.

    I tested every country in the selection box to find settings that allowed the highest power levels, and settled on Singapore instead of the US.

    I also read all of the bumph from these certification authorities, but in practice they seem to be mostly out of date, or not adhered to, or both, I reckon. Take absolutely no notice of any of them, and test it yourself. It's a real mess to figure out what is going on, not helped by the fact that drivers are also allowed to vary transmit power depending on the signal level of the client, and what link speed is negotiated. This may vary from manufacturer to manufacturer of wireless card, and may explain why received signal levels often jump around a lot.

    See this post & thread for another source http://www.linksysinfo.org/forums/showpost.php?p=342577&postcount=97
     
  6. wasp87

    wasp87 Network Guru Member

    Thanks for your insight Toastman, so many posts from you are interesting reads. You're a true warrior. I have a question though, by the look of the "Max Power Level" page you linked (http://www.cisco.com/en/US/docs/wireless/access_point/1200/vxworks/configuration/guide/bkscgaxa.html) it looked to me like Taiwan had the highest max power level and Singapore was even lower than the U.S.... Am I reading their charts wrong?


    Also I see a "Worldwide" option that allows all 13 channels to be used as well. What kind of limitations does this have?....
     
  7. Toastman

    Toastman Super Moderator Staff Member Member

    You must ignore online documentation when it seems at variance with the facts :)

    Quickly, the info I linked to is (was?) the permitted levels in those countries. I found many such documents but no two were the same. I wish such documents would be clearly labelled with a date! That's the problem with WWW data, most of it is NOT marked with any date - anywhere. So much of the data on the web is almost useless. This always makes me angry ... rant over.

    Anyway, my comments refer to findings made after tests on the actual firmware, which was designed to allow the power to be increased above levels that may not be legal in some of those countries.

    The Worldwide setting with ND drivers seems to have a low(ish) power limit in the driver, as do many countries. I believe I was able to get several dB increase by using Singapore setting. As I said above in #168, I did test every option in the menu before I committed to using Singapore. Unfortunately, I can't find all my old notes, they've probably been eaten by the cat - but I don't remember Taiwan being a useful alternative either when I decided to pick Singapore. Japan has 14 channels but the power output was set quite low, seemingly on all channels. A good option for a default, which is probably why Jon Zarate picked it - remember, he lived in Japan.

    EDIT - Even newer routers in 2015 seem to give strongest signals with the Singapore setting.
     
  8. jsmiddleton4

    jsmiddleton4 Network Guru Member

    Just because you can do something doesn't mean you should do something. Increasing signal strength does not equal better performance. At some point it means crappier performance.
     
  9. Toastman

    Toastman Super Moderator Staff Member Member

  10. wasp87

    wasp87 Network Guru Member

    My guess is because "It'll create higher signal noise and shorten chipset life."
     
  11. Toastman

    Toastman Super Moderator Staff Member Member

    Yes, it always comes back to this old chestnut :biggrin:

    But I wonder where it originated? I have yet to find, even on the forums, a clear case of a WRT54GL being killed by running higher power, though I will never deny the possibility of a normal failure occurring coincidentally after such an increase.

    Perhaps this topic deserves a thread of its own, but I'm rather loath to start one because I am rather weary of the whole subject. Still, I'm off to the shops, perhaps my car will explode when I get into it because of the high pressure on the seat... :D
     
  12. mstombs

    mstombs Network Guru Member

    Re poor performance

    If you turn the volume on your hifi up too loud, the sound will become distorted; something similar happens with wireless - you need a good signal to noise ratio, and the sweet spot is likely to be somewhere below maximum power. It is always better to use a high gain antenna, which will increase reception sensitivity as well as transmission - the wireless link needs to be bi-directional.

    Re Hardware life

    It is well known that old RF amplifier power stages can be damaged by running without an antenna connected - imagine you are holding the end of a long rope and sending a wave down the length - then drop the rope and your arm tries to fall off!

    BUT: any decent modern hardware should have inbuilt protection against voltage/current/thermal overload - but running anything at maximum could shorten its life!
     
  13. Toastman

    Toastman Super Moderator Staff Member Member

    Is the WRT54GL really capable of more output ?


    Firmware

    The original Linksys firmware is said to have had the transmit power level fixed at 28mW. Since many people wanted to increase the power for various reasons, probably to annoy the doomsday brigade, most third party firmware allows an increase in the level. The aim of most mods is to allow the use of full power on all channels.

    Tomato firmware, like most others, has a power setting selection up to 251. This is usually taken to mean the transmit power output in milliwatts, but I believe this should sensibly be viewed not as the actual transmit power output level, but only a relative number. The actual power output will often be considerably less than this figure, and is dependent on the wireless driver version in use. What that level actually is, isn’t easy to determine without proper equipment. So far, I haven't found any article in which it has been tested by an accurate method and a result posted which can be taken as authoritative. This chart from Linksys' FCC testing regarding the WRT54G which has a similar output stage was all I ever found:

    [Attached image: WRT Series.JPG]

    Different versions of the wireless driver result in different output levels, as the country selection and some other parameters can prevent operation at the full power and set internal maximum defaults. In the original Linksys/Tomato driver, an override parameter existed which would allow full power operation regardless of this, but this override no longer functions in the later "ND" drivers. No source code is available for the drivers, so we cannot investigate further.

    Hardware

    This router has a Broadcom BCM2050 wireless chipset driving an external SiGe 2528L power amplifier.

    http://www.sige.com/uploads/briefs/D...ep-10-2008.pdf

    The BCM2050's transmitter can provide +5dBm (3.16 mW) output. To increase this power, an external SiGe 2528L three-stage amplifier chip is used. The chip has internal temperature compensation and can also withstand a high level of antenna mismatch. In common with most modern devices, it is quite rugged. I could not find any figure for heat dissipation, but bearing in mind that the chip has a relatively low duty cycle, it does not get very hot even at the "full" setting of 251. The data sheet does not suggest that a heat sink is necessary. Nor would there be any practical way to attach one due to the chip size and packaging. The tried and tested "finger" method results in no burns, it is possible to leave one's finger on the chip without any pain. [Here I refer readers to consider the many recent laptops in which the graphics chips get so hot that the solder is melted and the chips drop off].

    According to the data sheets, and in accordance with normal practice, the PA is switched off when not in use by pulsing an "enable" input. Therefore it can produce no thermal noise when not transmitting. This is so that noise generated in these three stages does not increase the noise floor at the input to the router's own receiver.

    Many people who insist that the router will "only produce more noise" if the power is turned up refer to the noise measurement on the router itself. This is not an indicator, because the PA is switched off when receiving. Your transmitter cannot cause an increase in noise figure on the router's own receiver. If your router’s noise reading changes, it must therefore be due to something else.

    The noise floor of a channel is the amount of noise that the receiver sees on that channel when a client is not transmitting. This noise can come from many different sources. Noise can of course be created by our own transmitter, but what you should check for is an increase in noise at other nearby receivers on the same or adjacent channels.

    The Crunch ...

    The 2528L has a maximum power output, according to the data sheet, of 24dBm/251mW (mode b) and 21dBm/125mW (mode g).
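
    For anyone who wants to check the dBm-to-milliwatt arithmetic behind those datasheet figures, here is a minimal Python sketch (the 24dBm, 21dBm and +5dBm values come from the text above; the conversion formula is just the standard one):

        # Convert dBm (decibels referenced to 1 milliwatt) into milliwatts:
        # P(mW) = 10 ** (dBm / 10)
        def dbm_to_mw(dbm: float) -> float:
            return 10 ** (dbm / 10.0)

        print(round(dbm_to_mw(24)))      # ~251 mW - mode (b) maximum from the datasheet
        print(round(dbm_to_mw(21)))      # ~126 mW - mode (g) maximum, i.e. roughly 125 mW
        print(round(dbm_to_mw(5), 2))    # ~3.16 mW - the BCM2050's own output before the PA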

    So let us also dispel the myth that the router will fail if used at a setting of 251. It was actually designed to run at a maximum power level of 251 mW, and this is why that figure was originally patched into the firmware. However, it is my belief that this router never actually reaches this level, especially in mode (g). If my feeling about this is correct, this would make the router even less likely to suffer damage.

    Tests

    So let's see what happens in practice when the transmit power is varied, using Victek's mod 1.25 version 8515ND (which uses the 4.158.4.0 driver) and the country set to Singapore.

    I used the normal antennas on the WRT54GL, and a D-Link DWA110 USB adapter at the client. I selected channel 6, with only one other access point noted at a level of -88 dbm +/- 3dB approx. To avoid any misleading results caused by receiver overload, I used a line of sight path to the client at a distance of 20m. Since the intention of this test was only to verify operation at levels up to 251, I did not try a greater range and a low signal level.

    I decided not to try to measure download speed because this depends on so many other factors, but instead, the error rate (packet loss) of the download. Before beginning the tests, with the client switched off, I recalibrated the firmware's noise floor level, which on this channel was -95 dbm. Signal levels reported by the DWA110 were found to closely follow the readings from another WRT54GL, also calibrated, so these measurements are something everyone can relate to. Bear in mind they are only approximate figures calculated by the firmware from information supplied by the wireless hardware, and that every wireless card is different.

    I used mode (b) to begin, at a power setting of 10. The client associated at 11Mbps at -66dbm and a signal to noise ratio of 29dB. I started download of a large file and noted the level of errors at different settings. I checked settings of 10, 20, 42, 84, 150, and 251. There was no significant difference in the mean error rate at any setting. Noise floor stayed constant at -95dbm, the signal to noise ratio improved steadily until it reached 42dB at a setting of 251 and -53dbm. During these tests, I kept two WRT54GL’s on channels 1 and 11 running 2m from the router, which showed no change in their respective noise floors. I also switched one onto channel 6 at intervals to measure any noise floor change - there was no increase in noise level. The link connection speed remained at 11Mbps during the tests.
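
    As a rough sanity check on those readings: if the GUI setting really did scale linearly with transmit power (that's an assumption on my part, not something established here), going from 10 to 251 should lift the received level by about 10*log10(251/10), roughly 14dB, which is close to the 13dB actually observed (-66 to -53dbm). A quick Python sketch of that arithmetic:

        import math

        # Assumption for illustration only: treat the GUI "power" setting as if it
        # scaled linearly with actual transmit power.
        def expected_gain_db(setting_low: float, setting_high: float) -> float:
            return 10 * math.log10(setting_high / setting_low)

        print(round(expected_gain_db(10, 251), 1))   # ~14.0 dB predicted
        print(-53 - (-66))                           # 13 dB observed in the test above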

    Mode (g) connection at 20m was initiated at 54Mbps at a power level of 10mW, -70dBm, s/n ratio 25dB - quite a good signal. Using the same steps in power, there was little difference in the error rate at any speed, reaching -58dbm, s/n ratio 37dB. It did in fact improve slightly with settings over 150. The link connection speed remained at 54Mbps during the tests.

    Placing the client next to the router, at a distance of under 1m resulted in a high number of errors. Watching the error rate while moving the client machine showed the error rate increasing as it approached closer than about 2.5-3m to the router. This indicates receiver overload, and is a probable cause of some of the posts in the forums where people have found increased power to result in lower throughput.

    *

    It is interesting to note that the reported signal strength in mode (g) was about 4dB lower than in mode (b). So the router does seem to be adhering to the datasheet, which indicates that maximum power should be 125mW in mode (g).

    Conclusion

    These quick tests confirmed that there isn’t usually any problem in running a WRT54GL at any level you like. You are very unlikely to have any problems. I’ve always kept mine at or under the 150 setting, purely to be on the safe side, and I have never had any problems with overheating, increases in noise floor, or appearances of demons during the night. If you believe you have a situation where increasing transmit power might help, perhaps with the addition of a better wireless adapter at the client end, don't be put off by the merchants of doom. I have some clients who live in "blindspots" (rooms behind the lift shafts, for example) and they benefit immensely from the increased power levels on the router, and especially when using something like the TP-Link 200mW USB adapter with external 4dBi antenna.

    http://www.tp-link.com/products/prod...el=TL-WN422G

    Your mileage may vary, but if it does, keep an open mind, don't panic, and look carefully for the reason.

    And please note that being closer than maybe 2m to the router may result in receiver overload and increase the error rate.

    Additional teaser

    When setting up a bunch of AP's for a new project, I always test them in my apartment for a week before installation. It so happens that I have just bought the hardware for a new installation. So, just to see what will happen, I put 28 new AP's on the same channel all running at a setting of 251. The noise floor understandably suffered a little, -91 instead of a more normal -94/96. However, I am still able to associate with the AP's in this residential block - even the weaker ones - on the same channel - and send this post

    QED.


    EDIT - December 2009

    It is now over one year since I installed over 200 routers here in different blocks, all running 150mW and overclocked to 250MHz. Not a single failure. There are now over 350, I don't have an exact count. My only router failure so far was one bricked during an experiment in uploading backup data from a different router. Mea culpa.

    EDIT - August 2015

    There are a large number of routers and access points now on the market with up to 1 watt transmit power. This level can usually be enabled when using the "COUNTRY=SINGAPORE" setting. Range and penetration of concrete walls increases enormously.
     
  14. jsmiddleton4

    jsmiddleton4 Network Guru Member

    "Why"

    As far as I know this is the main answer.

    "Signal to Noise ratio"

    I do not know about shortening life span, etc. It makes sense that if you drive a thing to its maximum it will shorten its life span. If you drive your car engine at red line it will shorten its life span. It can run at that many rpm's. It should not run at that many rpm's.

    I've never run any of my routers with their respective signals boosted and waited to see how long they would last. I have boosted their signals and checked the signal to noise ratio. Power goes up, noise goes up, and the extra noise makes the boost pointless.
     
  15. wasp87

    wasp87 Network Guru Member

    Currently I'm trying out 150mW on the Singapore option, which Toastman has said should be a safe option. So far I think I may have better results with 100mW, but I'm still testing it out. Even if the router dies, I won't cry about it, as they don't cost too much, and I have a backup to replace it immediately.
     
  16. Toastman

    Toastman Super Moderator Staff Member Member

    Point I'm trying to make is that there are circumstances when an increase is desirable, useful, and cheap.

    These chips are designed to be bombproof, and I HAVE tested them at high levels for several years. None have failed, nor would I expect them to.

    What you get from me is the viewpoint of an RF engineer with many years' experience in RF design and testing. I have no axe to grind, no ulterior motive, just facts and explanations. Most of them actually obey the known laws of physics, not voodoo. Please ignore voodoo, unless that's the way you run your life :eek:

    wasp87, it won't die, I guarantee it. If it does, I will eat my shorts. I have several hundred of them running here. I must do a count one day...

    Is your router close to the client? If the client is overloaded, then it is logical to REDUCE the power. Usually this is why people report problems with an increase they didn't really need. If your client is within about 2.5m of the router, increasing the power may overload its receiver.
     
  17. rhester72

    rhester72 Network Guru Member

    Everyone does realize that perceived signal strength by the client and viability of the connection are unrelated, since the client has to communicate (with its own transmitter) back to the router?

    This is precisely why increasing signal strength is rarely beneficial, because the limiting factor is often the client, not the router. It may look better on a client chart when you pump up the power, but it's extremely likely that it does little to nothing to solve (what is usually) the core issue.

    Better antennas help because they increase router sensitivity in addition to range.

    Rodney
     
  18. Dent

    Dent Network Guru Member

    But won't an increase in transmit power from the router help the client with downloads from the router? An increase in client transmit power (which usually cannot be done) would help with uploads to the router. With the internet though, downloads are really what matters most to most people.
     
  19. jsmiddleton4

    jsmiddleton4 Network Guru Member

    .2 on three routers. Two in WDS mode. All good.

    As far as boosting signal strength making the download part better.... Sorry, but in order to have downloads, the client and the router have to tell each other they are connected. In other words, uploading back to the router - the send strength of the client - is equally as important as boosting the send strength of the router. It is not just a one-way process to obtain a sync and lock between the client and the router, so boosting the router independently of the client's send signal isn't going to do a lot. If the client's send is weak, the router and the client are not able to find each other and stay connected. It's not an either/or, it's a both/and scenario.
     
  20. Toastman

    Toastman Super Moderator Staff Member Member

    New thread for the discussion on WRT54GL power :

    http://www.linksysinfo.org/forums/showthread.php?p=349643#post349643

    What is "signal-to-noise ratio" ?
    This term seems to be misunderstood by a lot of posters, particularly the "doom" merchants. They commonly refer to "signal-to-noise ratio" with a sneer as if it were some unwanted rodent.

    So what is it? What about some of the other terms used in Tomato?

    Let's use an analogy.

    Imagine you are in a room, with the TV on and several people talking. We can refer to all of this background noise as the "noise floor". It is always present, but its level may wander up and down at any given moment. We can measure it and thereafter refer to it using a standard unit of measurement called the decibel. The same unit is used for radio power measurements. It is usually abbreviated to dB. The dB is a logarithmic measurement. An increase of 3dB is equal to double, or 2x, the power level. So 6dB would be 4x, 9dB would be 8x, and so on.
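
    Put in code form, the dB arithmetic above is just this (nothing router-specific, only the standard conversion):

        import math

        # Decibels express a power ratio: dB = 10 * log10(P2 / P1)
        def ratio_to_db(ratio: float) -> float:
            return 10 * math.log10(ratio)

        def db_to_ratio(db: float) -> float:
            return 10 ** (db / 10.0)

        print(round(ratio_to_db(2), 1))    # 3.0 -> doubling the power is ~3dB
        print(round(db_to_ratio(6), 1))    # 4.0 -> 6dB is 4x
        print(round(db_to_ratio(9), 1))    # 7.9 -> 9dB is roughly 8x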

    So let us say the background noise (the noise floor) of our room is 50dB. Now someone talks to us, and let us say that the level of his voice is measured as 70dB. This is the wanted "signal". The signal to noise ratio is therefore 70-50=20dB. You would have no problem talking to him at this level of noise, though it may be irritating.

    If the speaker then lowers his voice, to a measured level of 52dB, the signal-to-noise ratio is now 52-50=2dB. We can hardly hear him. We may hear the odd few words, but anything complicated will be lost in the general noise. You can still communicate, but the smaller the amount of information, the more likely it is to be heard. He may have to repeat himself several times to tell you anything complicated.

    So, the signal to noise ratio measurement tells us how likely it is that information will be received on a particular channel.

    By definition, an increase in signal to noise ratio can never make things worse! However, again taking our noisy room as an example, once the speaker begins to speak loudly enough that we hear 100% of what he says, then there is no point at all in his shouting any louder. Beyond a certain point, increasing the signal to noise level does not make things any better either.

    In our router, levels are given as e.g. -70dBm. Without going into specifics, just consider these reading as relative, so -70dBm is stronger than -90dBm by 20 dB.

    The router has some built-in routines to measure the "noise floor", or background noise, on the channel in use. Background noise can come from many sources: other routers, phones, microwave cookers, airport radar, etc. The lowest noise level the router can report is -99, but the actual figure will usually be somewhat higher than that. Usually it will be around -90dBm or better.

    In Tomato, we are given two useful measurements: the "RSSI" figure, which is the received signal strength, and the "Quality", which is the signal-to-noise ratio. This is calculated as the RSSI reading minus the noise floor figure (which can be found at the bottom of the "Device List" page).
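
    In other words, the "Quality" figure is nothing more than a subtraction. A minimal sketch (the numbers below are made-up examples, and these are not Tomato's internal variable names):

        # Tomato's "Quality" is simply: RSSI (dBm) minus the noise floor (dBm).
        def quality_db(rssi_dbm: float, noise_floor_dbm: float) -> float:
            return rssi_dbm - noise_floor_dbm

        print(quality_db(-70, -95))   # 25 dB - a perfectly usable link
        print(quality_db(-88, -90))   # 2 dB  - barely above the noise, expect problems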

    Ultimately, the most important figure is really the signal to noise ratio.

    From the above, we can see that doubling the transmit power should result in an improvement in the received signal by approximately 3dB and an increase in the signal-to-noise ratio also of 3dB. If this does not happen, there are other factors coming into play.

    If, for example, the transmitter was capable of an increase in power level, but at the same time its PA became nonlinear (similar to distortion in an audio amplifier), it begins to transmit more noise. The noise floor of the channel may be degraded by, say, 1dB, the distant receiver's signal may still increase by almost 3dB, but because the channel's noise floor has changed, the signal-to-noise ratio does not increase by 3dB, perhaps only 2dB. The distortion produced by the over-driven power amplifier also creates noise in adjacent channels and begins to affect the users of those channels, if they are close enough. The router simply was not capable of transmitting cleanly at this level, and the power should be lowered until no such adverse effects are seen.

    Now, what would happen if we simply ignored this warning, and increased the drive level even further? The received signal may increase only a small amount, if at all, because the transmitter can't actually provide any more power. However, the NOISE transmitted by the router may increase dramatically as the transmitter's power amplifier is driven into nonlinearity (distortion) - the channel's noise floor will degrade further - and the signal to noise ratio will actually DECREASE. It is at this point that the link begins to suffer very badly from dropped packets and errors, and the throughput drops. Interference to other channels may be quite severe.

    I must point out that since the WRT54GL's transmitter is turned off when receiving, it cannot change the noise floor as measured on its own receiver. Any change that you do see is quite meaningless. You must use another router or receiver to measure the noise floor.

    It can be seen therefore, that the signal to noise ratio is a very useful figure indeed. Using this figure alone, an experienced technician will often be able to make quite accurate predictions of what a link is capable of.
    .
     
  21. Toastman

    Toastman Super Moderator Staff Member Member

    Increasing Transmit Power

    This does seem to be an area which somehow causes people to break out in a sweat. I actually get quite angry about this, so please excuse me if I occasionally ridicule some of the forum posters - but it seems very difficult to otherwise get people's attention. There is just so much utter nonsense published on the web by people who have no idea whatsoever what they are talking about. Running "higher power" CAN and DOES improve your performance, and it does so according to known laws, not the mumbo-jumbo or magic apparently entertained by many forum posters.

    Now - first of all, let's get real about this "high power" crap. We are talking about a ridiculously small amount of power used in Wifi routers - I would normally even hesitate to refer to it as a transmitter - it is simply PUNY. The very term "HIGH POWER" to me conjures up images of at least a 1,000W transmitter, cooled by high speed centrifugal fans. That is 8,000 times more power than our little 30 dollar router, OK? Even a mobile phone runs 20 times the power!

    So - we are not talking here about running "High Power". We are talking about an increase from a puny amount of power to a *slightly less* puny amount of power.

    So, I am not even telling anyone to just go and use "higher" power - that would be against sound engineering principles and create a lot of interference in an already congested band. Not to mention that in your country it may be illegal. I'm just trying to dispel some myths which seem to have taken root over the years, and which are just bloody ridiculous to a communications / RF engineer.

    So I will try to explain here why so many people are talking out of their backsides.

    IMPROVING THE ANTENNA

    Of course, improving the antenna is one way to get better performance. But you must first understand how antennas can "improve" performance. Many people insist that using a higher gain/more directional antenna will always be better, as it improves data flow in both directions. But there are many occasions when using a directional aerial will not improve the signal, such as in a multi-storey building. Antennas cannot magically provide power from nowhere! Energy from other directions is concentrated in one particular direction to provide gain. It is taken AWAY from those other directions in order to concentrate it in others.

    For example, a typical add-on router antenna would be a vertically stacked collinear array of coaxial dipoles, with a relatively small gain over the standard antenna. This kind of antenna will concentrate the signal into a very flattened omnidirectional donut, reducing the signal severely both above and below the router. This may be fine for a single floor and just one, or maybe a few users, but it will actually make things much worse for everyone on floors above and below the router. This is one scenario where increasing transmit power or even better, adding additional AP's, is a better approach.

    And of course, people do seem to forget that the signal back from a client to the router can ALSO be improved by simply using a USB wireless adapter with a higher power output, and usually an external antenna connector with a 3 or 4dB gain antenna. Several of these have appeared on the market recently which provide about 200mW. Thus, the improvement can easily be made reciprocal. But be careful not to break the appropriate laws in your own country where applicable.

    There is another myth that has taken hold, due to people with no knowledge trying to explain something they don't understand themselves. And that is, that increasing power above the tomato defaults adds NOISE. This is also nonsense. The supposed "noise" is actually generated when the power amplifier in the transmitter becomes overloaded and nonlinear in operation. This may or may not happen when the level exceeds the router's defaults, depending on who set those defaults in the first place. In general, the defaults have been set very low to satisfy local regulatory requirements, and the power level can be increased considerably without the transmitter being driven into nonlinearity and producing interference or noise. If performance drops when power is increased, generally there is another, simpler, explanation.

    Now I will explain why an increase in power can improve your download speed even if the client's signal back to the router is not simultaneously improved.

    LET US FIRST CONSIDER A CASE WHERE THERE IS A FIXED SPEED IN BOTH DIRECTIONS

    There is a direct relationship between the number of dropped packets and the packet size. Small packets (ACKs) from the client are far less likely to get dropped because of poor signal or interference. The larger packets used by the router to send incoming data streams to the client are much more subject to packet loss, and thus we have an unequal situation. We often break large packets down into smaller ones to increase the link reliability and reassemble them at the other end. For anyone who has worked on long distance links over marginal paths, this is common practice.
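
    A crude way to picture why the large packets suffer more: if each bit has some small, independent chance of being corrupted, the probability of a whole packet arriving intact falls off quickly with its size. A rough Python sketch (the bit-error rate here is purely illustrative, not a measured figure):

        # Probability a packet arrives intact, assuming an independent bit-error
        # rate (BER). The BER value is invented for illustration only.
        def packet_survival(length_bytes: int, ber: float = 1e-5) -> float:
            return (1 - ber) ** (length_bytes * 8)

        print(round(packet_survival(64), 3))     # ~0.995 - a small ACK nearly always gets through
        print(round(packet_survival(1500), 3))   # ~0.887 - a full-size data frame is noticeably lossier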

    We can reduce these packet losses by improving the signal to noise ratio at the receiver. Increasing the router's transmit power does just that. It can - and does - increase the speed and reliability of downloads to your computer IF your signal was marginal.

    The replies from the client remain unaffected of course - but as stated, they are mostly small packets and subject to much less loss. Of course, if the client cannot communicate properly with the router after it has associated, then this is no longer true, but for most marginal signals a very significant benefit will often be seen. The fact that the client associated in the first place means that it can talk to the router.

    NOW - REMEMBER THAT ROUTERS
    ***NEGOTIATE***

    SPEEDS

    Actually, however, the router does not have a fixed speed in both directions. It negotiates a link speed with the local computer in both directions - if a weak signal is what you have, a lower link speed will be set up. A strong signal will support higher speeds. The transmission method will be changed, even dropping from G to B rates - whatever is necessary to set up a reliable link at the new speed.

    Let us consider a PC to Router link which is fairly poor, and the link has settled at negotiated speeds of 1 Mbps UP (to the router) and 1.5Mbps DOWN (from the router). The 1Mbps uplink speed is the lowest available - it is a B speed.

    Notice anything? Most ADSL uplinks are only 256kbps to 1Mbps - so actually, this isn't really a big problem for ADSL users anyway! These speeds are perfectly adequate for our uplinks unless we are uploading files etc.

    As the signal gets even weaker, packets will begin to be dropped and the speed might fall to say 100Kbps ... but so what? It is still adequate for most browsing purposes.

    Now let us increase our router's transmit power from 42 to 150mW, about 6dB.

    Our new negotiated upload speed remains the same at 1Mbps - it may even be as low as our hypothetical 100Kbps - but the download improves to 14 Mbps!

    So our downloads have increased roughly tenfold and our uploads (which are mostly small ACKs) have remained exactly the same as they were before.
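
    Working those example numbers through in a quick sketch (they are the hypothetical figures used above, nothing more):

        import math

        # Hypothetical link from the text: router power raised from 42 to 150 (mW),
        # downlink renegotiated from 1.5 Mbps to 14 Mbps, uplink unchanged at 1 Mbps.
        print(round(10 * math.log10(150 / 42), 1))   # ~5.5 dB - the "about 6dB" step
        print(round(14 / 1.5, 1))                    # ~9.3x faster downloads
        # The uplink stays at 1 Mbps - it is mostly small ACKs anyway.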

    The difference seen by the average internet user, who is mostly concerned with his download speeds, is HUGE.


    I hope I have made it absolutely clear that this is a myth.


    The fact that we have not improved our upload performance is totally irrelevant - because we are only sending a small amount of data in that direction, mostly just acknowledgements that we have received packets. We don't NEED to improve it. We probably already have sufficient upload speed - but what we need is better download speed to cope with the incoming data. Yes, I know I keep repeating this - in the hope that this small but somewhat important fact will sink in with some of the doom and gloom merchants! Even those who seem quite intelligent otherwise seem to be unable to grasp this simple concept. Don't ask me why - I don't know.

    This big improvement is most easily seen by using a firmware which has simultaneous display of UP/DOWN wireless connection speeds to the clients. Such as Tomato's "Devices" page. You will see the link speeds constantly changing to cope with varying conditions. Once you have seen this, you will realise that most of the posts on this subject maintaining that increasing transmit power is pointless, are complete rubbish. If this fact isn't obvious, go back to the beginning and read it again. Keep doing it until you understand! If in the end you still don't understand, that's too bad, but it's your own problem :confused:

    To facilitate this I have added connection rate display to Victek's RAF 8515.2 and you will find a link below to download it. Teddy Bear's USBmod / USB-RT mode was the source.

    So, to recap - increasing transmit power on the router often makes a tremendous improvement on a marginal link, because:-
    • It is the DOWNLOADS from the router that are the largest size packets, and the ones most likely to suffer losses.
    • Whereas the smaller ACKS from the PC are much less likely to be lost, and
    • The downlink speed FROM the router will also be negotiated at a higher rate than the uplink.
    And if the client really can't communicate well with the router, you can always use a higher power USB adapter, as mentioned above. The choice is entirely up to you! There are many such adapters which run 200mW which makes the link broadly reciprocal.

    NB

    I've had a lot of PM's from people who disagree with what I've said here. Now, let's just point out a few things, OK?

    This stuff isn't something that is up for discussion or argument, as it is very BASIC communication theory. For anyone who doesn't like it, then perhaps moving to a different universe where different laws of physics apply might be a good idea. Now, I am not going to apologize here for upsetting anyone. I am an RF and communications engineer and I do know what I am talking about. The forums are full of complete nonsense and it was high time someone put it right by explaining why many of the commonly held beliefs are utter garbage. To those people who argue in the face of all the facts, I would ask you to either go to University and learn the theory and practice by yourself, then go work in industry for at least 20 years, or perhaps believe someone who knows just a tad more than you do about the subject, OK? And please stop spreading misinformation in the forums when you are talking out of your backsides.

    I agree totally that there is no justification for increasing power IF it is not necessary. However, for many people with insufficient signal at the receiver, it can mean the difference between a relatively poor, lowspeed service, and ultimate happiness :D

    But, it should really only be done when there is a good reason to do so. If you live in a multi-floor building made from steel reinforced concrete, for example, you will find it very useful indeed. For most people it isn't at all necessary, and it may even result in a decrease in your download speed because of overload of one or both receivers - you may be too close to the router to begin with. More to follow on this in another post.

    I hope this helps some people, because trying to make a technical subject understandable by others who don't work in that field, is not easy. Check elsewhere for confirmation by all means, but be very careful before accepting anything you read on the forums as gospel. Most forum posts are by laymen who are not experts in the field. [I'm trying to be kinda tactful here - my colleague wants to change "laymen" to "idiots"].

    Let me just point out something - perhaps 95% of all of the information on the internet is from idiots who have no idea what they are talking about and aren't interested in learning, which is much worse.

    Search through communications handbooks, research papers, specifications, papers, from sources such as IEEE, Cisco or the other manufacturers, professional bodies, universities and research establishments - these are the best sources of information. In particular, avoid magazine articles from professional journalists ... and just remember, if you need an operation to remove an appendix you would not ask the local baker for his input, would you?

    Next, I will dispel some more myths by posting some details of the WRT54GL's transmitter, and its power levels and performance.
     
  22. Dent

    Dent Network Guru Member

    Thank you, Toastman, for a very informative post. Look forward to your subsequent posts on this topic.
     
  23. baldrickturnip

    baldrickturnip LI Guru Member

  24. jsmiddleton4

    jsmiddleton4 Network Guru Member

    "but as stated, they are mostly small packets and subject to much less loss."

    While that is true, it's not totally relevant. It doesn't matter how big or little the client's uploading packets are, they still have to be received intact. That information being received intact at the router depends on the client's sending signal, not the router's sending signal. You can boost the bejeepers out of the router's send signal, and if a client's send signal can't reach the router, the size of those packets matters naught.

    Wireless communication between a client and a wireless router is a 2-way process, or 2-way negotiation between client and router. Having one side boosted and the other side not doesn't really do much. It's like yelling louder at a deaf person thinking you will improve their hearing by raising your volume.
     
  25. Toastman

    Toastman Super Moderator Staff Member Member

    baldrickturnip - Did you ever use the diversity switch in earnest for this purpose? I once tried to use the two antennas to favor different parts of a site, but it didn't seem to work as I expected. I believe once the router is receiving a satisfactory signal from a client, it doesn't appear to switch to the other antenna quickly enough to be useful. Actually, removing the antennas individually seemed to show the same one always being selected. I couldn't find any info on how the firmware manages the diversity switching, but I suspect it doesn't work properly. I didn't follow it up, though. If you're using a dish, it may be easy for you to see if this is the case?

    It's worth attaching a scope to the switch to see if and how often it does comparisons, if I get time I'll take a look.

    @jsmiddleton, why is what I have written not totally relevant?

    Dear oh dear. If you can't understand even basic principles, there really is no hope.

    To use your own analogy, if you shout one word at a partly deaf person, there's a very good chance he will understand you. If you shout a complete paragraph, he is very unlikely to do so. The smaller the amount of information sent at any one time, the greater the chance of it being received. That's very basic stuff. That is why it is not good practice to send very large packets over marginal links.

    Let's expand this a little to make it more understandable for you.

    Suppose you are near a main road and passing trucks every 10 minutes make it impossible to hear your friend. Keeping sentences to no more than a few words each, you can talk for up to ten minutes before a passing truck wipes out a sentence and you have to repeat it. Not a big deal. It was just ONE sentence out of perhaps dozens.

    Now suppose you transfer that conversation in one delivery, one huge sentence. Before you have finished it, a truck passes. The sentence was not received intact and you have to say it again. But dammit, same thing happens again! You may have to repeat it MANY times before your friend gets it. Each time, the truck has ruined your communication.

    Sending data in small bursts = greater reliability. And this is VERY elementary stuff indeed.
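
    The truck analogy maps onto a small model quite nicely: if interruptions arrive at random, the chance of getting a message through in one piece drops sharply as the message gets longer, so short bursts need far fewer repeats. A toy Python sketch (the interruption rate and durations are invented purely for illustration):

        import math

        # Toy model: interruptions ("trucks") arrive at random at some average rate,
        # and a message only counts if no interruption lands while it is being sent.
        def p_clear(duration_min: float, trucks_per_min: float = 0.1) -> float:
            return math.exp(-trucks_per_min * duration_min)

        def expected_attempts(duration_min: float) -> float:
            return 1 / p_clear(duration_min)

        print(round(expected_attempts(0.1), 2))   # short sentence: ~1.01 attempts on average
        print(round(expected_attempts(10.0), 2))  # one huge sentence: ~2.72 attempts on average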
     
  26. baldrickturnip

    baldrickturnip LI Guru Member

    Toastman - I have only ever used the parabolic on a single antenna ( unscrewed the other ) to boost as a client - I am not sure how the router controls the antenna it uses or how fast it will switch between them.
     
  27. Toastman

    Toastman Super Moderator Staff Member Member

    Thanks, just wondered. I don't really have a use for the parabolic dish antennas here, but I have a lot of old satellite dishes out back. I stuck an SMC USB adapter at the focal point of one just to see how it performed as a feed. The results were extremely good - there can be no easier way to make a high gain antenna for wifi links!
     
  28. jsmiddleton4

    jsmiddleton4 Network Guru Member

    "why is what I have written is not totally relevant?"

    I answered that question quite clearly.

    About the only scenario in which boosting the signal in the router may show up in the real world as a positive thing is in a multi router setup as in WDS, bridges, etc. I doubt any of us have clients, like laptop wireless devices, that allow us to boost the client signal.
     
  29. Toastman

    Toastman Super Moderator Staff Member Member

    I don't think that you've answered the question clearly, or at all. Perhaps my explanation is inadequate, and I have failed to get my message across. So let me try once again.

    Routers negotiate link speeds, right? Your laptop has a low tx power, so it negotiates a low speed, let's say 5Mbps sending to the router. That's pretty damned fast and quite adequate for requesting a web page, yes? Let's suppose our downlink was also 5Mbps. Not bad, but not very fast, right?

    Now, let's say we increase the router's power to the maximum, and a faster link speed of 54Mbps is negotiated for data sent to the laptop. We now have a 5Mbps uplink and a 54Mbps downlink. We previously had a 5Mbps uplink and a 5Mbps downlink.

    The difference is that our downlink has improved from 5Mbps to 54Mbps.

    Isn't that exactly what we want? We want faster download speeds, don't we? That's exactly what we got by increasing the router's transmit power. What's the problem? Why can't you grasp this very simple fact?
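
    To put a number on it with those same figures, consider fetching a hypothetical 20 MB file over the two links (the file size is just an example I've picked for illustration):

        # Time to pull a hypothetical 20 MB file at the negotiated link rates,
        # ignoring protocol overhead. 20 MB = 160 megabits.
        FILE_MEGABITS = 20 * 8

        def download_seconds(link_mbps: float) -> float:
            return FILE_MEGABITS / link_mbps

        print(round(download_seconds(5)))    # ~32 s on the old 5 Mbps downlink
        print(round(download_seconds(54)))   # ~3 s on the new 54 Mbps downlink
        # The 5 Mbps uplink carrying the ACKs is the same in both cases.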

    Perhaps you might point out which bit of elementary communications theory you disagree with, and I will ask the appropriate organizations to rewrite the textbooks for you.

    Oh, by the way. You can easily improve the signal from most laptops dramatically by plugging in a decent USB wireless adapter. There must be very few that don't have a USB port. Many laptops have really awful wireless. Some Apple machines with a metal case are really, really bad.

    I can recommend the TP-Link TL-WN422G which runs 200mW and has a 4dBi external coaxial dipole antenna. Some rooms here that received a "very weak signal" (XP) jumped up to "Good" rating just by plugging this into their laptops. There are now some adapters with even larger antennas!

    EDIT - In 2015 there are many routers and adapters with up to 1 watt power.

    http://www.tp-link.com/products/productDetails.asp?class=wlan&pmodel=TL-WN422G
     
  30. Toastman

    Toastman Super Moderator Staff Member Member

    WRT54GL Power Output

    Is the WRT54GL (and the ASUS RT-N16) really capable of more output ?

    The ASUS RT-N16 uses the same power amplifier chip mentioned in this article and is therefore subject to the same arguments. See this link:

    http://www.linksysinfo.org/forums/showpost.php?p=369064&postcount=3

    Firmware

    The original Linksys firmware is said to have had the transmit power level fixed at 28mW. Since many people wanted to increase the power for various reasons, probably to annoy the doomsday brigade, most third party firmware allows an increase in the level. The aim of most mods is to allow the use of full power on all channels. That doesn't mean you HAVE to use it!

    Tomato firmware, like most others, has a power setting selection up to 251. This is usually taken to mean the transmit power output in milliwatts, but I believe this should sensibly be viewed not as the actual transmit power output level, but only as a relative number. Some recent versions allow settings up to around 400mW, but this is clearly nonsense.

    The actual power output will often be considerably less than the set figure, and is dependent on the wireless driver version in use. What that level actually is, isn’t easy to determine without proper equipment. So far, I haven't found any article in which it has been tested by an accurate method and a result posted which can be taken as authoritative.

    Different versions of the wireless driver result in different output levels, as the country selection and some other parameters can prevent operation at the full power and set internal maximum defaults. In the original Linksys/Tomato driver, an override parameter existed which would allow full power operation regardless of this, but this override no longer functions in the later "ND" drivers. No source code is available for the drivers, so we cannot investigate further.

    Hardware

    This router has a Broadcom BCM2050 wireless chipset driving an external SiGe 2528L power amplifier.

    http://www.sige.com/uploads/briefs/DST-00074_SiGe_SE2528L_brief_Rev_Sep-10-2008.pdf

    The BCM2050's transmitter can provide +5dBm (3.16 mW) output. To increase this power, an external SiGe 2528L three-stage amplifier chip is used. The chip has internal temperature compensation and can also withstand a high level of antenna mismatch. Also, going by a few clues in the support source code for the closed-source Broadcom driver blobs, it does appear to have a temperature sensor in it, which could be used to reduce power when the device gets too hot. We don't know if it does so. In common with most modern devices, it is quite rugged. I could not find any figure for heat dissipation, but bearing in mind that the chip has a relatively low duty cycle, it does not get very hot even at the "full" setting of 251. The data sheet does not suggest that a heat sink is necessary. Nor would there be any practical way to attach one due to the chip size and packaging. The tried and tested "finger" method results in no burns; it is possible to leave one's finger on the chip without any pain. [Here I refer readers to consider the many recent laptops in which the graphics chips get so hot that the solder is melted and the chips drop off - silicon can run HOT with no adverse effects].

    According to the data sheets, and in accordance with normal practice, the PA is switched on ONLY when in use, by pulsing an "enable" input. Therefore it can produce no thermal noise when it is not transmitting. This is so that thermal noise generated in these three stages does not increase the noise floor at the input to the router's own receiver.

    Many people who insist that the router will "produce more noise" if the power is turned up refer to the noise measurement on the router itself. This is not an indicator, because the noise measured by the router obviously cannot come from the router's transmitter - because both the transmitter and its PA are switched OFF when receiving. Your transmitter cannot cause an increase in noise figure on the router's own receiver - because it's turned OFF! If your router's noise reading changes, it must therefore be due to something else. Don't listen to nonsense from forum posters who have no idea what they are talking about. [ I DO know - because it's my job to know, and I make no apology for that. ]

    The noise floor of a channel is the amount of noise that the receiver sees on that channel when the router is not transmitting. This noise can come from many different sources. Noise can of course be created by our transmitter, but that would have no effect on our own receiver - what you should check for is an increase in noise at other nearby receivers on the same or adjacent channels.

    The Crunch ...

    The 2528L has a maximum power output, according to the data sheet, of 24dBm/251mW (mode b) and 21dBm/125mW (mode g).

    So let us quickly dispel the myth that the router will fail if used at a setting of 251 because:

    The data sheet shows clearly that it is DESIGNED to run at a maximum power level of 251 mW, and that, obviously, is why that figure was originally patched into the firmware.

    Tests

    So let's see what happens in practice when the transmit power is varied, using Victek's mod 1.25 version 8515ND (which uses the 4.158.4.0 driver) and the country set to Singapore.

    I used the normal antennas on the WRT54GL, and a D-Link DWA110 USB adapter at the client. I selected channel 6, with only one other access point noted at a level of -88 dbm +/- 3dB approx. To avoid any misleading results caused by receiver overload, I used a line of sight path to the client at a distance of 20m.

    I decided not to try to measure download speed because this depends on so many other factors, but instead, the error rate (packet loss) of the download. Before beginning the tests, with the client switched off, I recalibrated the firmware's noise floor level, which on this channel was -95 dbm. Signal levels reported by the DWA110 were found to closely follow the readings from another WRT54GL, also calibrated, so these measurements are something everyone can relate to. Bear in mind they are only approximate figures calculated by the firmware from information supplied by the wireless hardware, and that every wireless card is different. It's a great pity, but I don't have any access to professional test equipment. Nevertheless, we will do the best we can using primitive means to at least draw some approximate conclusions.

    I used mode (b) to begin, at a power setting of 10mW. The client associated at 11Mbps at -66dbm and a signal to noise ratio of 29dB. I started download of a large file and noted the level of errors at different settings. I checked settings of 10, 20, 42, 84, 150, and 251. There was no significant difference in the mean error rate at any setting. Noise floor stayed constant at -95dbm, the signal to noise ratio improved steadily until it reached 42dB at a setting of 251 and -53dbm. During these tests, I kept two WRT54GL’s on channels 1 and 11 running 2m from the router, which showed no change in their respective noise floors. I also measured the noise floor on Channel 6 using another router in the same room - there was no increase in noise level. The link connection speed remained at 11Mbps during the tests.

    Mode (g) connection at 20m was initiated at 54Mbps at a power level of 10mW, -70dBm, s/n ratio 25dB - quite a good signal. Using the same steps in power, there was little difference in the error rate at any speed, reaching -58dbm, s/n ratio 37dB. It did in fact improve slightly with settings over 150. The link connection speed remained at 54Mbps during the tests.

    PLEASE NOTE - Placing the client next to the router, at a distance of under 1m resulted in a high number of errors. Watching the error rate while moving the client machine showed the error rate increasing as it approached closer than about 2.5-3m to the router. This indicates receiver overload, and is a probable cause of most of the posts in the forums where people have found increased power to result in lower throughput. So either use lower power or keep your PC a few meters away from the router for best performance.

    *

    It is interesting to note that the reported signal strength in mode (g) was about 4dB lower than in mode (b). So the router does seem to be adhering to the datasheet, which indicates that maximum power should be 125mW in mode (g).

    Conclusion

    These quick tests confirmed that there isn't usually any problem in running a WRT54GL at any level you choose. You are very unlikely to have any problems. I've always kept mine at or under the 150 setting, purely to be on the safe side, and I have never had any problems with overheating, increases in noise floor, or appearances of demons during the night. It is actually quite normal practice in parts of Europe for Tomato users to use 250mW! If you believe you have a situation where increasing transmit power might help, perhaps with the addition of a better wireless adapter at the client end, don't be put off by the merchants of doom. I have some clients who live in "blindspots" (rooms behind the lift shafts, for example) and they benefit immensely from the increased power levels on the router. This has encouraged some of them to try using something like the TP-Link 200mW USB adapter with external 4dBi antenna, which has improved things even more. Long-distance links using the same model router at both ends with higher-gain antennas also benefit from both routers being set to use a higher power - this would make the change reciprocal, of course.

    http://www.tp-link.com/products/productDetails.asp?class=wlan&pmodel=TL-WN422G

    Your mileage may vary, but if it does, keep an open mind, don't panic, and look carefully for the reason. There is always a reason. You may not see or understand that reason - that is the purpose of schools and universities, research laboratories, etc - nobody expects you to be an expert. But just don't accept the word of a forum poster who happens to be an airline pilot or a building worker - because it isn't his field and what you're getting is only his uninformed opinion. Which, I have to point out, is usually nonsense.

    Here's an anecdote - yes, it's silly, but you do take my point?


    Again

    Please DO take note that being closer than maybe 2-3m to the router may result in receiver overload and increase the error rate. Even an AP 5 meters away from another does not report correct signal strengths; around 25m is needed. Many of the people on the forum claiming better results by lowering the transmit power have subsequently mailed me to say that it was because their client was too close to the router, and it all worked fine 30m away!

    Additional teaser

    When setting up a bunch of AP's for a new project, I always test them in my apartment for a week before installation. It so happens that I have just bought the hardware for a new installation. So, just to see what will happen, I put 28 new AP's in my room, on the same channel, all running at a setting of 251. The noise floor understandably suffered a little, -91 instead of a more normal -94/96. However, I am still able to associate with the AP's in this residential block - even the weaker ones 45m away - on the same channel - and send this post :cool:

    QED.

    EDIT - December 2009

    It is now almost two years since I installed over 200 routers here in different blocks, in a tropical country with high ambient temperatures, in enclosed plastic boxes with no ventilation, all running 150mW and overclocked to 250MHz. Not a single failure. There are now over 350, I don't have an exact count. My only router failures so far have been 2 bricks due to my own experiments. Mea culpa! Both recovered with JTAG.

    Further Edit - It is now August 2015 and 3 WRT54GL routers have had failures due to the electrolytic capacitors. That is 8 years of running in enclosed boxes, often in full sun, and a general ambient temperature of 31+ degrees C.

    EDIT- August 2010 - A WARNING ABOUT MISINFORMATION

    In the last 6 months we have gone through the usual teething troubles with the ASUS RT-N16 firmware development and this has now become the standard router for new installations. Yet whenever there was an unexplained glitch and things did not seem to be working, instantly the router was "overheating" and the merchants of doom immediately added heatsinks and fans and swore that this cured all ills. After many people actually realized this was nonsense, the blame shifted onto the power supply, which is already capable of supplying far more current than is actually needed. Never mind, again this crap was posted initially on DD-WRT and then spread to all forums as another instant cure. Yet here we are in August, all the problems have been solved, and not a single one has been proven to be caused by overheating or power supply problems. I would ask that readers learn from this to keep an open mind, and not jump to conclusions when there is no evidence whatsoever to support them. Or, to be very blunt about it, there are a lot of uninformed idiots posting misinformation in the forums. Please, guys, the manufacturers are not stupid, they have test labs with millions of dollars of accurate test equipment - so why do you think a 15-year-old schoolkid with no test equipment at all knows better than their engineers?
     
  31. jan.n

    jan.n Addicted to LI Member

    Thank you Toastman, good read with interesting information!
     
  32. Toastman

    Toastman Super Moderator Staff Member Member

    What is "signal-to-noise ratio" ?

    This term seems to be misunderstood by a lot of posters, particularly the "doom" merchants. They commonly refer to "signal-to-noise ratio" with a sneer as if it were some unwanted rodent.

    So what is it? What about some of the other terms used in Tomato?

    Let's use an analogy.

    Imagine you are in a room, with the TV on and several people talking. We can refer to all of this background noise as the "noise floor". It is always present, but its level may wander up and down at any given moment. We can measure it and thereafter refer to it using a standard unit of measurement called the decibel. The same unit is used for radio power measurements. It is usually abbreviated to dB. The dB is a logarithmic measurement. An increase of 3dB is equal to double, or 2x, the power level. So 6dB would be 4x, 9dB would be 8x, and so on.
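
    Purely as an illustration of that arithmetic (my own sketch, nothing to do with the router firmware), the ratio-to-decibel conversion is just 10·log10 of the power ratio:

        import math

        def ratio_to_db(power_ratio):
            """Convert a linear power ratio to decibels."""
            return 10 * math.log10(power_ratio)

        def db_to_ratio(db):
            """Convert decibels back to a linear power ratio."""
            return 10 ** (db / 10)

        print(round(ratio_to_db(2), 2))   # doubling the power -> ~3.01 dB
        print(round(ratio_to_db(4), 2))   # 4x the power       -> ~6.02 dB
        print(round(ratio_to_db(8), 2))   # 8x the power       -> ~9.03 dB
        print(db_to_ratio(20))            # 20 dB              -> 100x the power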

    So let us say the background noise (the noise floor) of our room is 50dB. Now someone talks to us, and let us say that the level of his voice is measured as 70dB. This is the wanted "signal". The signal to noise ratio is therefore 70-50=20dB. You would have no problem talking to him at this level of noise, though it may be irritating.

    If the speaker then lowers his voice, to a measured level of 52dB, the signal-to-noise ratio is now 52-50=2dB. We can hardly hear him. We may hear the odd few words, but anything complicated will be lost in the general noise. You can still communicate, but the smaller the amount of information, the more likely it is to be heard. He may have to repeat himself several times to tell you anything complicated.

    So, the signal to noise ratio measurement tells us how likely it is that information will be received on a particular channel.

    By definition, an increase in signal to noise ratio can never make things worse! However, again taking our noisy room as an example, once the speaker begins to speak loudly enough that we hear 100% of what he says, then there is no point at all in his shouting any louder. Beyond a certain point, increasing the signal to noise level does not make things any better - but by definition it can never make things worse.

    In our router, levels are given as e.g. -70dBm. Without going into specifics, just consider these readings as relative, so -70dBm is stronger than -90dBm by 20dB.

    The router has some built-in routines to measure the "noise floor", or background noise, on the channel in use. Background noise can come from many sources: other routers, phones, microwave cookers, airport radar, baby alarms, security cameras, bluetooth devices, etc. The lowest noise level the router can report is -99, but in practice it will usually be somewhat higher - typically around -90dBm or better.

    In Tomato, we are given two useful measurements: the "RSSI" figure, which is the received signal strength, and the "Quality", which is the signal-to-noise ratio. This is calculated as the RSSI reading minus the noise floor figure (which can be found at the bottom of the "Device List" page).
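
    As a trivial sketch of that same subtraction (the numbers below are invented, just to show the shape of it):

        def quality_db(rssi_dbm, noise_floor_dbm):
            """Tomato's "Quality": the RSSI reading minus the noise floor."""
            return rssi_dbm - noise_floor_dbm

        print(quality_db(-65, -92))   # -> 27 dB, a comfortable link
        print(quality_db(-88, -92))   # -> 4 dB, barely hanging on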

    Ultimately, the most important figure is really the signal to noise ratio.

    From the above, we can see that doubling the transmit power should result in an improvement in the received signal, at any distance, by approximately 3dB - and an increase in the signal-to-noise ratio also of 3dB. If this does not happen, there are other factors coming into play.

    If, for example, the transmitter is capable of an increase in power level, but at the same time its PA becomes nonlinear (similar to distortion in an audio amplifier), it begins to transmit more noise. The noise floor of the channel may be degraded by, say, 1dB; the signal at the distant receiver may still increase by almost 3dB, but because the channel's noise floor has changed by 1dB, the signal-to-noise ratio only increases by 2dB. The distortion produced by the over-driven power amplifier also creates noise in adjacent channels and begins to affect the users of those channels, if they are close enough. The router's hardware simply was not capable of transmitting cleanly at this level, and the power should be lowered until no such adverse effects are seen. However - even in this case, the increase DID provide a 2dB improvement, but at the expense of users of other channels, who may experience some interference. If you aren't near other people or AP's then of course this would be an acceptable tradeoff.

    Now, what would happen if we simply ignored this warning, and increased the drive level even further? The received signal may increase only a small amount, if at all, because the transmitter can't actually provide any more power. However, the NOISE transmitted by the router may increase dramatically as the transmitter's power amplifier is driven into extreme nonlinearity (distortion) - the channel's noise floor will degrade further - and the signal-to-noise ratio may actually DECREASE. It is at this point that the link begins to suffer very badly from dropped packets and errors, and the throughput drops. Interference to other channels may be quite severe.
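
    To put some made-up numbers on those two paragraphs (a sketch only - the dB values are invented for illustration, not measured): the net change in signal-to-noise ratio is simply the extra signal at the receiver minus whatever the noise floor has risen by.

        def net_snr_change_db(rx_signal_gain_db, noise_floor_rise_db):
            """Net SNR change when extra TX power also raises the channel noise floor."""
            return rx_signal_gain_db - noise_floor_rise_db

        print(net_snr_change_db(3, 0))   # clean PA: +3 dB SNR
        print(net_snr_change_db(3, 1))   # slightly overdriven PA: +2 dB SNR, still a net gain
        print(net_snr_change_db(1, 4))   # badly overdriven PA: -3 dB SNR, the link gets worse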

    I must point out that since the WRT54GL's transmitter is turned off when receiving, it CANNOT change the noise floor as measured on its own receiver. Any change that you do see, or imagine you see, is quite meaningless. It must have come from somewhere else or be a measurement error. You must use another router or receiver to measure the noise floor, and that router must be a good distance away from the first so that it isn't being overloaded. What I'm trying to say, tactfully, is that almost all of the stuff you see in the forums is complete nonsense written by people with no knowledge of the subject.

    It can be seen therefore, that the signal to noise ratio is a very useful figure indeed. Using this figure alone, an experienced technician will often be able to make quite accurate predictions of what a link is capable of.
     
  33. Toastman

    Toastman Super Moderator Staff Member Member

    Place marker for follow-on.
     
  34. wasp87

    wasp87 Network Guru Member

    Nice posts Toastman, Thanks

    Can you comment on the CPU overclocking ability of the WRT54GL's?
     
  35. Toastman

    Toastman Super Moderator Staff Member Member

    Overclocking the WRT54GL

    OK, a different subject, but it's related, so deserves an answer.

    One of the favourite subjects that keeps coming up in the forums is that of overclocking the WRT54GL (and the TM). Specifically, what is the maximum safe speed, does it overheat your router, and what is the point anyway?

    Let's start with the last question. If you have so much traffic that your router is struggling, then quite obviously having more speed would help. Such a situation is common these days, when many people have internet connections faster than 20Mbps.

    So why do I overclock, when I have only quite feeble 5Mbps ADSL links?

    The answer is simple.

    Overclock by 25% and the speed of most things increases by 25%. That is actually quite noticeable. If you do get everything working properly, and as fast as you can possibly make it (this includes mode/router/ethernet cards/switches/PC/Sufficient Memory/Decent Web Browser/Half-open Connection Limits/Good Operating System etc..) then the reward is an almost instant response to the web GUI from Tomato. Plus, faster response from web browsers, more throughput for your P2P, lower latency, better gaming, etc.

    Individually, a 25% increase doesn't seem like so much. Now add up the effects of 25% increases in all of the above - and you will be amazed how crap your internet was before you began. Does downloading a movie in 3 hours as opposed to 4 hours make sense? That's what you can get by overclocking, sometimes more. Pay attention to all of the other things I mentioned? You just might find that movie arrives in 2 hours.

    25% extra speed is normally easy to achieve with the WRT54GL running at 250MHz. At that speed, most GL's are stable (I have only ever had one that wouldn't overclock at 250MHz). A few users report problems, but the vast majority who write to me report improved results. And yes, it's very noticeable. The GL is way ahead in speed of the ASUS WL500gP v2's, which I have also used. Subjectively, the 500 "feels" very SLUGGISH. [The v2 uses a budget "all-in-one" chipset; these are known as SOC (solution on a chip). As an engineer, this translates in most cases to "really cheap and really CRAP solution on a chip". Generally, such "solutions" are made for cost reasons, and rarely do I see this improve the performance].

    The maximum "safe" speed is 250MHz. There is a possibility that YOUR router may not overclock. Very small. But it's up to you to decide whether to do it or not. I now have something over 350 WRT54GL's in operation here at 250MHz with no problems.

    It is possible to go above this but then you really do start to tread on thin ice. Firstly, software modifications are usually necessary to enable any support for higher clock speeds. Then, as you approach the true limit (around 275MHz), the chip may well need extra cooling. There's a danger of bricking the router irrecoverably. In fact 250MHz is the fastest overclock that can be achieved without modification of the table in the CFE. Anything higher than this will actually show a setting corresponding to what you selected, but in reality it doesn't change. Those interested in clocking higher than 250MHz, take a look at this link:

    http://www.bitsum.com/openwiking/owbase/ow.asp?WRT54G.

    Does it overheat? No. Let me expand on this. I live in a country where the temperature is always over 30 degrees and often 40+ degrees. Because I have many routers running in remote locations it is important to me that they are stable. I've used these routers under full load over more than 4 years in full sun in enclosed cases. They don't overheat and there's really no need to add a heat sink or fan, though if it makes you feel better and you know what you are doing, of course there's no harm in it. Except that the fan will eventually clog up and stop running, and that WILL cause the CPU to overheat cos it's now got NO cooling at all.

    Incidentally, the specimen I had that would not clock at 250MHz did not do so because of heat. It did not get any hotter than any other WRT. In fact, it still misbehaved at 5 degrees C. It was probably an out of spec processor.

    There have been many posts in which people have claimed that their router must be overheating because it has "slowed down", and an instant cure has been to add a heatsink, whereupon all was restored to normal. Let me just point out right now that microprocessors do NOT slow down gracefully when they overheat. They fail to work properly. That results in:

    • system crashes
    • random lockups
    • reboots
    • memory errors
    • application errors
    • disk problems
    The exception to this is when the manufacturer of the particular microprocessor has deliberately implemented some additional scheme to protect the chip. This is for example done by Intel in their PC processors. They have onboard temperature monitoring and when a threshold is reached (at around 90 degrees C) they reduce the clock speed of the processor.

    As the microprocessors used in SOHO routers do NOT have this kind of protection, it is highly unlikely that a router has "slowed down" due to overheating. Be careful what you believe.

    More recently, all sorts of problems with the ASUS RT-N16 firmware have been blamed on "overheating" processors, which has also since been proved to be a red herring.
     
  36. wasp87

    wasp87 Network Guru Member

    250MHz scares me a bit; it just seems a very substantial overclock, coming from someone who builds/overclocks all his rigs.

    So you think most would work fine at 233 correct?
     
  37. fyellin

    fyellin LI Guru Member

    Wonderful set of articles so far. It's great to hear from someone who 1) understands the theory, 2) is willing to test the theory to see whether it holds up in the real world, and 3) can explain it so clearly to the rest of us.
     
  38. Toastman

    Toastman Super Moderator Staff Member Member

    wasp87, don't even bother worrying about it - if you are already resigned to trying 233, just set 250. You are very unlikely to have any trouble. As I said, I have only ever seen one that wouldn't overclock, but trying did no harm. This is a measly 25% overclock after all.

    I also overclock my Core 2 Duo E4700 at 4GHz, by the way. 100% stable for weeks running Prime95, and not one additional water tower or liquid nitrogen tank anywhere in sight, no whirling LED fans, strobe lights, whistles, or pipes dripping water. My quad core i7 is also running at 4.1GHz with just a somewhat higher-speed fan (3000 rpm under full load) than most people would use. In normal use the fan doesn't run at that speed; at 1200 rpm it isn't even audible. My point - don't use the sh***y fans that come with aftermarket coolers; they're quiet but they are also completely useless. Use proper fans from DELTA.

    ***EDIT*** Quite off-topic, but the new E3200 Celeron chip (actually a full dual-core at 2.4GHz) easily overclocks to around 4GHz with no more than an increase in clock speed and no extra cooling. It's damned fast for a cheap processor! Now, why can't we have something like this in a router? ASUS??

    ***EDIT 2015*** There is now an entry-level, cheap, dual-core processor that can outperform many of the latest chips! It can also be overclocked to beyond 4GHz easily. It is the G3258.

    You may have a bit of fun reading through this thread too.... http://www.linksysinfo.org/forums/showthread.php?t=62149
     
  39. wasp87

    wasp87 Network Guru Member

    I'm running my Q6600 @ 3ghz atm since its a 24/7 computer and needs to be pretty reliable. Got a few E8200's running at 3.2-3.4.


    After doing some speed tests, 216 was the fastest results out of 200/216/233/250.
     
  40. Toastman

    Toastman Super Moderator Staff Member Member

    I've been running PC's 24/7 ever since IBM brought out the original PC / AT. I hate to think of what my electricity bills have added up to over the years.

    Re. your speeds, something is wrong with your speed tests, I believe. That isn't normal; if it *runs* at 250 - then by definition - it IS faster. Just after startup, check the log for the "bogomips" figure. That will confirm the overclock.
     
  41. Toastman

    Toastman Super Moderator Staff Member Member

    WiFi performance graphs

    Here are the best throughput rates normally achievable with typical Wifi equipment using different connection speeds and modulation types. The conclusion to be drawn from this chart is that it pays to set your router to (g)-only if you need fast WLAN speed!

    [IMG: best throughput vs connection speed and modulation type]

    The next table shows the probable best range and signal strength requirement for an unobstructed line-of-sight path that may be expected from a Wifi installation (source: Intel). The figures are a bit on the optimistic side but are the ones most often quoted whenever Wifi is discussed. You can see that mode (b) is capable of much greater range at lower speeds; the range at 1Mbps is quite remarkable. A 14-20dB stronger signal is required for full-speed operation at 54Mbps in (g) mode.

    I have marked the 802.11(b) speeds in red.

    [IMG: range and signal strength requirements table (source: Intel)]

    There is another table commonly circulated on the internet which is even more optimistic, showing a threshold of -92/94 dBm at 1Mbps and -71/72 at 54Mbps, but I don't believe those figures can be achieved in practice, and certainly not with the poor sensitivity of the receivers used in SOHO routers:

    1 Mbps: -92 dBm
    2 Mbps: -91 dBm
    5.5 Mbps: -90 dBm
    9 Mbps: -88 dBm
    12 Mbps: -87 dBm
    18 Mbps: -86 dBm
    24 Mbps: -83 dBm
    36 Mbps: -80 dBm
    48 Mbps: -74 dBm
    54 Mbps: -72 dBm
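
    Taking those commonly quoted thresholds at face value (a sketch only - as noted above, real SOHO receivers will do worse), you can estimate the fastest rate a given received signal should support, and how much margin is left:

        # The commonly quoted (optimistic) receive thresholds from the list above, in dBm
        THRESHOLDS_DBM = {1: -92, 2: -91, 5.5: -90, 9: -88, 12: -87,
                          18: -86, 24: -83, 36: -80, 48: -74, 54: -72}

        def best_rate_mbps(rssi_dbm, margin_db=0):
            """Highest rate whose threshold is met with the given safety margin."""
            usable = [rate for rate, thr in THRESHOLDS_DBM.items()
                      if rssi_dbm >= thr + margin_db]
            return max(usable) if usable else None

        print(best_rate_mbps(-70))               # -> 54 (only just)
        print(best_rate_mbps(-70, margin_db=5))  # -> 36 once a 5 dB safety margin is demanded
        print(best_rate_mbps(-85))               # -> 18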

    Next is a chart showing typical packet loss at various signal-to-noise ratios, on a 2Mbps “b” link. This mode was used to best illustrate loss, because “b” mode uses no forward error correction and hence packet loss is not masked by the firmware. You can see that a level of approx. 15dB SNR was necessary to initiate a connection, this corresponds pretty well with the last table assuming a low noise floor. Error rate dropped rapidly with increasing strength until at around 20dB SNR the error rate was almost zero. Some random scatter data above the baseline was actually due to co-channel interference.

    [IMG: packet loss vs SNR on a 2Mbps "b" link]

    This scatter chart shows the signal strengths of a large selection of typical clients over time, versus distance from the router. From this you can see the range will extend significantly if you can increase gain by, say, 10dB.

    [IMG: scatter chart of client signal strength vs distance from the router]

    And this one shows the difference in PER (error rate) for a small packet size of 10 bytes against that of 1000 bytes. This uses 802.11b at 1Mbps, and is of particular interest for those attempting very long-distance links. Down in the weak-signal zone between 3 and 8dB SNR, you can clearly see the error rate for small packets is much lower than that of the larger packets. This phenomenon is what allows the use of asymmetric power levels: the signal from the client to the router (consisting mostly of smaller packets and ACKs) is less likely to suffer from dropped packets than the main data stream from the router TO the client.

    [IMG: PER for 10-byte vs 1000-byte packets, 802.11b at 1Mbps]
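
    The shape of that difference follows from the basic relationship between bit errors and packet errors (a simplified sketch assuming independent bit errors, no error correction, and a purely hypothetical bit error rate):

        def packet_error_rate(bit_error_rate, packet_bytes):
            """PER for independent bit errors with no forward error correction."""
            return 1 - (1 - bit_error_rate) ** (8 * packet_bytes)

        ber = 1e-4  # hypothetical BER somewhere in the weak-signal zone
        print(round(packet_error_rate(ber, 10), 3))    # ~0.008 - small packets mostly survive
        print(round(packet_error_rate(ber, 1000), 3))  # ~0.551 - large packets mostly fail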

    Conclusion

    When setting up a wifi installation, you should aim for a signal-to-noise ratio of >20dB. This will allow operation at the full 54Mbps connection speed with some margin to spare. A further safety margin above this is desirable to allow for laptops being moved while in use, etc., and 25dB SNR is my own personal "minimum".

    Unless you positively have to support some older (b) clients, turn off support for (b) mode.

    If you are trying to set up a large site with few AP's, or operate over long distances with marginal signals, setting (b) only may give you better throughput. Consider also enabling CTS protection mode.

    NEGOTIATED SPEEDS

    Actually, however, the router does not have a fixed speed in both directions. It negotiates a link speed with the local computer in each direction - if only a weak signal is available, a lower link speed will be set up. A strong signal will support higher speeds. The transmission method will be changed, even switching from G to B rates - whatever is necessary to set up a reliable link at the new speed.

    Let us consider a PC to Router link which is fairly poor, and the link has settled at negotiated speeds of 1 Mbps UP (to the router) and 1.5Mbps DOWN (from the router). The 1Mbps uplink speed is the lowest available - it is a B speed. Notice anything? Most ADSL uplinks are 1Mbps - so actually, this isn't really a big problem anyway! As the signal gets even weaker, packets will begin to be dropped and the speed might fall to say 100Kbps ... but so what? It is still adequate for most browsing purposes.

    Now let us increase our transmit power from 42 to 150mW.

    Our new negotiated upload speed remains the same at 1Mbps - it may even be as low as our hypothetical 100Kbps - but the download improves to 14Mbps. So our downloads have increased by a factor of almost 10 and our uploads (mostly small ACKs) have remained exactly the same. The difference seen by the user, who is mostly concerned with his download speeds, is HUGE.

    The fact that we have not improved our upload performance is quite irrelevant - because we are only sending a small amount of data in that direction. We don't particularly NEED to improve it.
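
    For what it's worth, the arithmetic behind that hypothetical example (a sketch - the rates below are the invented figures from the paragraphs above, not measurements):

        import math

        def power_change_db(old_mw, new_mw):
            """dB change from altering the transmit power setting."""
            return 10 * math.log10(new_mw / old_mw)

        print(round(power_change_db(42, 150), 1))  # ~5.5 dB stronger signal at the client

        # Only the router's transmit power changed, so only the DOWN direction benefits.
        down_before_mbps, down_after_mbps = 1.5, 14   # hypothetical negotiated rates
        up_before_mbps, up_after_mbps = 1, 1          # unchanged - the client's power didn't move
        print(round(down_after_mbps / down_before_mbps, 1))  # ~9.3x faster downloads
        print(round(up_after_mbps / up_before_mbps, 1))      # 1.0x - uploads unchanged, which doesn't matter here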

    This big improvement in download speeds is most easily seen by using a firmware which has simultaneous display of UP/DOWN wireless connection speeds to the clients. Such as Tomato's "Devices" page. There you will see the rates continually changing to adapt to varying conditions.

    Once you have seen this, you will realise that most of the posts on this subject maintaining that increasing transmit power is pointless, because the client isn't also increasing its power, are complete rubbish.
     
  42. wasp87

    wasp87 Network Guru Member

    Right when I went to 250 from 216, my wired internet speed cut in half both times I tried it. Page loading was noticeably more laggy, but the GUI seemed a little faster. I think if it were unstable, it would create a symptom like this. Otherwise I don't understand why it's happening either.
     
  43. Toastman

    Toastman Super Moderator Staff Member Member

    Here, the GUI is always almost instantaneous. Web access fast. Download speed up by c. 20%. Any '54GL I pick is the same. Strange !
     
  44. wasp87

    wasp87 Network Guru Member

    Also do you know if a restart is required for truly changing the CPU clock / TX Power?

    I always have anyways to be sure, but I was wondering.
     
  45. Toastman

    Toastman Super Moderator Staff Member Member

    Tx Power changes immediately, clock change requires a reboot.
     
  46. TVTV

    TVTV LI Guru Member

    All of the instructions i've come across said a reboot is required in order for the CPU clock to be "refreshed".
     
  47. wasp87

    wasp87 Network Guru Member

    I've heard this as well, even though the "Status" will update with your new clock immediately.
     
  48. Toastman

    Toastman Super Moderator Staff Member Member

    It would be logical, yes, you're right about the clock. The txpower changes are immediate though.
     
  49. jan.n

    jan.n Addicted to LI Member

    Just to be sure, I'll measure this evening if changes in the TX-power setting have immediate effect or need a reboot to function and I'll also measure SNR and signal strength at various settings.

    And I'll be using a Fluke EtherScope Series II, so no rants about the choice of inadequate tools :wink:
     
  50. baldrickturnip

    baldrickturnip LI Guru Member

    thanks Toastman and Jan

    tests and experiments to quantify what is really happening are better than just theory to enable us to make decisions.

    can you put a thermocouple or rtd on the main chip and look at the temp change when overclocking ?
     
  51. jan.n

    jan.n Addicted to LI Member

    I only have a single WRT54GL and rather won't open it to put the thermocouple on the cpu. If someone in Germany or close-by countries can send me a device I'll gladly test overclocking vs cpu temp...
     
  52. darthboy

    darthboy LI Guru Member

    Hi, is it possible that if a client is too close to the AP, there might be issues if the AP txpower is too high?

    thanks for your help.
     
  53. Trademark

    Trademark Network Guru Member

    For the CPU clock, a reboot is required, even though the GUI will reflect the change.

    If you run a cat /proc/cpuinfo command, you will see the change has not taken effect until after a reboot.

    I believe a TX power change is immediate though, with no reboot required. Does anyone know a command to verify this?
     
  54. zeteticApparat

    zeteticApparat Addicted to LI Member

    I don't see anything that would confirm or deny this?
    BogoMIPS are always and only calculated at boot (and are a poor indication of speed anyway).
     
  55. rhester72

    rhester72 Network Guru Member

    If I remember correctly from my OpenWRT days, the CPU speed is actually set by the bootloader, not the kernel, and the NVRAM setting is actually read by the former. Thus, a reboot is required.

    Rodney
     
  56. Toastman

    Toastman Super Moderator Staff Member Member

    Rodney, thanks for pointing that out. Yes, the bootloader reads NVRAM to set the speed. Therefore it is also possible that a bad NVRAM setting could well and truly brick the router.

    Interesting.

    Re the transmit power adjustment, no need to get all technical - you can see the change immediately on any client !

    darthboy - Yes. If you overload the receiver, then it won't work properly.

    However, it wouldn't be very sensible to have a client really close to an AP anyway - you might as well use an ethernet cable for a proper connection and some decent speed :D
     
  57. fyellin

    fyellin LI Guru Member

    This is probably true for a desktop. Not so true for a laptop, where you have to go through the hassle of finding the cable, turning off the wireless, etc., etc.

    In fact, the only time I "tether" my laptop is when I'm using it to upgrade a router.

    On the CPU speed question. Are there any simple speed tests that I can run on the router to confirm that it is, indeed, running faster? I'd prefer not to rely on the subjective "This feels zippier."
     
  58. rhester72

    rhester72 Network Guru Member

    The bogomips rating in /proc/cpuinfo is probably one of the more reliable indicators.

    Rodney
     
  59. Toastman

    Toastman Super Moderator Staff Member Member

    Frank, Yes, I see. If you are using a wireless router in the same room, it would be sensible to limit the power to something low or make sure the router isn't close to where clients may be, from what I've seen. Of course, we're not talking about the end of the world, just dropped packets, nothing to really worry about for most people.

    My bogomips increase from 199.47 to 249.03. As I said before, if the router will run at this speed, by definition it is faster. My experience is that things run faster at 250, but wasp87 reports that for him 216 gives better throughput. Why, I can't imagine.... anyone?

    I might be convinced that some processes might not get any faster, but I really can't see why anything should slow down!

    **** I have over 350 WRT54GL's running at 250MHz and also 150mW transmit setting. No failures at all in 4 years.
     
  60. wasp87

    wasp87 Network Guru Member

    Yea 250 has definitely been very questionable for my router. I have just switched to the newest pre-sp2 DD-WRT and will try 250mhz soon again.

    250mhz is the highest option available in DD-WRT. I wish there was a prime95 made to run on routers to test stability lol.

    I noticed I have the BCM5352 rev 0 CPU. Are any of the high clocking 250mhz GL's a revision other than 0? (I understand most should be 0)
     
  61. jan.n

    jan.n Addicted to LI Member

    measuring power vs noise - results

    Hi all,

    there has been quite some discussion here about how increasing transmit power affects the noise.
    Let's not speculate anymore, here are my findings using professional measurement equipment (EtherScope II)

    I measured on channel #4 while #3 and #5 are both not used (so there should be little or no xtalk).
    I sat about 3m away from the AP and had a direct line of sight. My AP uses the std antennas (IMHO 2dB gain).
    I measured each tx-power setting for 5 minutes and show you the max readings.

    [IMG: jan.n's TX power setting vs measured noise readings]

    Conclusion
    1) No reboot required. Setting a higher TX power via the web interface has direct effect.
    2) More TX power does not mean more noise.
     
  62. Toastman

    Toastman Super Moderator Staff Member Member

    Jan.n,

    Thank you for taking the trouble to do this test and post the result. It would be useful to know what Tomato version / wireless driver you used.

    If you have the time, may I ask a further favour? I'd love to determine what the peak transmit power actually is at different levels, and especially the maximum? (But it may not be possible with the Fluke, as I recall, it can't measure RF power, only optical).

    Thanks again!

    EDIT:

    The test from the Fluke, which is "presumed" to be accurate, shows that the power levels indicated by Tomato are meaningless. There was approximately a 20dB difference between the weakest and strongest settings above. Assuming a max tx power of 251mW, this would indicate the lowest setting was just a few mW, not 28mW.

    Personally, I do not believe the measurements. One would expect some differences, but these readings are plain wrong.
     
  63. wasp87

    wasp87 Network Guru Member

    Back at 250mhz on Build 12533 dd-wrt, 150mW TX. So far it's running good, but I also had 250mhz running ok for a bit before, then saw problems.

    EDIT: Went back down to 216mhz, speed decrease with 250mhz was very apparent.
     
  64. Toastman

    Toastman Super Moderator Staff Member Member

    Perhaps it is your particular WRT54GL. A bit disappointing, no ? The only one I ever had that didn't overclock to 250 worked at 240, but I wasn't keen on using it. The problem was not due to overheating - no amount of cooling would make it work at 250MHz.

    TVTV did some download tests and posted these figures; this sort of increase is what happens on all of my GL's.

    (see http://www.linksysinfo.org/forums/showpost.php?p=332214&postcount=26)
     
  65. jan.n

    jan.n Addicted to LI Member

    I'd also like to measure the real ERP, but as you say, it's not possible with the Fluke.
     
  66. SgtPepperKSU

    SgtPepperKSU Network Guru Member

    Could you try the experiment again while transferring a large file (saturating the wireless link)? If the results are the same (increased power != increased noise), I think we can finally put this to rest.
     
  67. Victek

    Victek Network Guru Member

    Appreciate the effort to shed light on this subject, but this is not the right way to measure and post the RF power. One important figure is missing: the rate. Figures below are with 1.25.8515.2 ND.

    At TX Power = 42. Distance: 5 meters. Noise level: -95dBm

    54M: -68dBm@10% PER
    11M: -85dBm@8% PER
    6M: -88dBm@10% PER
    1M: -90dBm@8% PER

    So the WRT54GL is the most powerful router in the world.. but at one rate 1Mbps... ;)

    So, it's very important to fix the rate in all the measurements. If not we are confusing people. Equipment used: Azimuth's ACE 400NB Channel Emulator.
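
    One way to read those figures (my own back-of-envelope reading of the numbers above, not Victek's methodology): subtract the quoted noise level and you get the SNR each rate needs, which is exactly why the rate must be fixed before comparing power levels.

        NOISE_DBM = -95
        # Victek's quoted sensitivities at TX Power = 42 (dBm at ~8-10% PER)
        SENSITIVITY_DBM = {54: -68, 11: -85, 6: -88, 1: -90}

        for rate, dbm in sorted(SENSITIVITY_DBM.items(), reverse=True):
            print(f"{rate:>2} Mbps needs roughly {dbm - NOISE_DBM} dB of SNR")
        # 54 Mbps needs ~27 dB, 1 Mbps only ~5 dB - the high rates are where power matters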
     
  68. jan.n

    jan.n Addicted to LI Member

    IMHO the readings at different rates are not comparable, as the modulation type changes with the rates. At 54 it's 64-QAM, at 18 it's QPSK and at 6 it's BPSK.

    I'm no RF professional, but there are lots of variables to consider:
    g: Antenna gain [dB]
    ERP: effective radiated power [W]
    d: distance from antenna [m]
    e: strength of the electric field[V/m]

    P = Power-input from amp into antenna
    g = antenna gain
    ERP = P * g
    We know that the standard antennas have a gain of approx. 2dB, but we don't reliably know what P is, so we have to measure it.
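
    In decibel terms that multiplication becomes a simple addition (a sketch - the 2dB antenna gain is the assumption above, and the input power is a placeholder, since as noted we don't actually know P):

        import math

        def mw_to_dbm(p_mw):
            """Convert milliwatts to dBm."""
            return 10 * math.log10(p_mw)

        def erp_dbm(p_mw, antenna_gain_db):
            """ERP (in dBm) = transmitter power (dBm) + antenna gain (dB)."""
            return mw_to_dbm(p_mw) + antenna_gain_db

        print(round(erp_dbm(100, 2), 1))  # a placeholder 100 mW into a 2 dB antenna -> ~22 dBm ERP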

    Please forgive me my question if it's silly but:
    Wouldn't we need a test receiver and spectrum analyzer if we want to really find out?

    But I really have to tell you one thing: We are reaching a level of discussion where I can't contribute anymore. You know, I'm in IT-controlling and even though I consider myself quite computer-literate I have to give up here :wink:

    All I want to know is: What's the ERP at different tx-power settings?
     
  69. Victek

    Victek Network Guru Member

    Jan.n, unfortunately yes, the forum is full of practical examples and measurements made by many contributors which have built a Tower of Babel, because the methodology followed was only understood at the practical level and skill of each one. So, I appreciate your sentence 'you want to show a final figure', but again one variable was missed: "rate".

    ERP at what rate? I can do the same table you saw at 42, 100, 125, 150, 250 and it will show the real power of the router, but for sure... above a certain figure you can't maintain a higher rate due to the noise generated in the RF stage. I'll do it this weekend if I have a gap. As always, power without control leads to uncertain results.
     
  70. Toastman

    Toastman Super Moderator Staff Member Member

    I have read of many tests performed by people in the forums. They've always been highly confusing and often contradictory when trying to measure download rates against speeds, modes, power. The problem is that we have very little information on what is actually going on in the wireless drivers. And that goes for both ends of the link.

    Wireless drivers are allowed to vary transmit power according to mode and link speed. Whose cards/routers properly implement this, and whose do not? Then there's regdomain and countrycode - is it properly implemented, and is it working? And then there is the matter of DTPC, which can allow a card to change its power levels according to what country it is in. Who implements this, who doesn't, was it switched on or off when the measurements were made? There are the power-saving modes in the router, two of them, which may be affecting your link in ways you don't know about. The same thing in the device you are measuring it with, if it's not professional equipment but something based on a wifi chip. The antenna diversity switching - is the same antenna in use throughout the test?

    We really don't know most of these things. It means that even two people doing the same tests with the same card may not even be operating under the same conditions. Or even the same test performed at different times!

    Perhaps the best that we can really do is to perform our simple tests under conditions where the connection mode and speed and as many other variables as possible remain the same throughout the test. And let it rest at that. The result isn't perfect but it's adequate to draw some conclusions. And - to make a very important point - the "feel" of how any equipment functions is not to be dismissed. It's probably worth more than many people give it credit for. Throughout my career I have never drawn a conclusion until I have used a piece of equipment for a prolonged period, got to know it, how it performs, how it responds to change - in short, how it "feels". Most good engineers will do the same.

    So I consider that this quick test is nevertheless very useful in that it established:

    a) That a setting of 251 (whatever it actually means) does give a useful increase in power output.

    b) That it did not increase the noise floor.

    c) That the relative levels were given by test equipment that did not solely depend on the half-baked measurement technology such as RSSI used by wifi cards (though better than nothing).

    As for throughput, people will have to make up their own minds after trying it. For an absolute power level from the antenna connector, we will have to wait until one of us gets our sticky paws on a spectrum analyzer - (and I don't mean a modified USB adapter!) :biggrin:
     
  71. Toastman

    Toastman Super Moderator Staff Member Member

    wasp87, all my WRT54GL's have been 5352 revision 0

     
  72. darthboy

    darthboy LI Guru Member

    Oh, my laptop (Atheros 5005G) on the study table, the Buffalo AP on the cabinet next to the table. Laptop kept locking up when AP txpower set to 70mw. Turned it down to 28 and all is fine...

    Since problem has been fixed, no point running a cable or re-arranging the furniture in my room.
     
  73. wasp87

    wasp87 Network Guru Member

    So does anyone know if using the max 251mW is damaging or not?
     
  74. Toastman

    Toastman Super Moderator Staff Member Member

    Tsk .... that was the whole point of the entire thread! You have to make up your own mind, but all the available evidence shows that nothing will be damaged.

    The problem with forums is that they are full of posts like this:

    "I increased my power from 42 to 48mW and the router got fried" ... (yes, I've seen many just like it).

    You have to decide whether the poster was (a) serious (b) drunk (c) on drugs (d) just plain stupid (e) trolling or (f) it was all a coincidence and really due to a nearby lightning strike or a power surge, that he somehow failed to notice.

    In a world where hundreds of thousands of WRT54GL's, perhaps even several millions, have been sold, it is inevitable that some will fail in operation. Just one post from someone whose router just *happened* to be running 150 or 251 at the time it gave up the ghost, probably started all this absolute crap off in the first place.

    Just remember that the power amplifier used in the WRT54GL was DESIGNED to run at 251mW.
     
  75. jan.n

    jan.n Addicted to LI Member

    So let's start a new thread and put together a test case.
    First we define a scenario (by which I understand the test environment, i.e. antennas used, distance from AP,...),
    then we define what to test.
    Thirdly we agree on how to test...

    Not necessarily in that order, that is.
     
  76. Toastman

    Toastman Super Moderator Staff Member Member

    jan.n You're very welcome to start a new thread, but as for me, I've really had enough of beating this subject to death, I'm convinced it's a waste of time because most of the forum readers are not technical and seem to believe that physical laws don't apply to them.

    The purpose of the thread was simply to illustrate that it is possible to provide a higher power from the router without damaging the router or causing the noise floor to increase. I am lucky enough to have a very large sample (now several hundred) of different WRT54GL's to test, so results will not be confused by the odd duff one. So I presented the evidence, and now you kindly confirmed the findings.

    Mission accomplished, enough already :eek:

    As far as I am concerned as an engineer, the people who maintain the opposite view have not proven their case. They never had a case to begin with. Or to put it another way, I think most of them are trolling. Or insane. Or both.

    BTW - Here's a guy who did another test, same result:

    http://forums.whirlpool.net.au/forum-replies.cfm?t=982038

    I think his chart is worth reproducing here

    [IMG: packet percentage vs transmit power setting chart]

    ***Always bear in mind that the number may not actually represent the true RF power output***
     
  77. wasp87

    wasp87 Network Guru Member

    Toastman - To me it looks like @ 54Mbps, the packet percentage goes up at 251mW, but at 48Mbps,36Mbps, 24Mbps it looks like the packet percentage goes down at 251mW... am I looking at it wrong?

    How is 251mW best if it was worse on everything except 54Mbps..
     
  78. gingernut

    gingernut LI Guru Member

    Means more packets are sent at 54Mbps, which is what everybody using wireless wants: speed.
     
  79. Toastman

    Toastman Super Moderator Staff Member Member

    wasp87, the graph shows what amounts to statistically negligible changes at those lower speeds, you have to decide whether the extra gain is worth a few percent change in error rate. Nothing is black and white and these extremely small changes don't prove anything in the short term.

    To repeat, the power increase won't achieve anything at all unless your signal was marginal to begin with, when it may improve things. If you are looking for a straight, simple answer, like "the 251 setting is better" - well, unfortunately, it can't be given. It is better only under certain conditions, and it may be worse under others. That applies to almost every engineering problem. The only important thing is - does the increase in power result in an improvement in signal-to-noise ratio - does it increase the reliability and throughput? You can't tell this from the "quality" figure as that is not a true measure; it's just the RSSI reading minus the noise floor. And we don't know how the receiver even measures "noise" ....

    Further, the writer doesn't actually mention what firmware he's using; it's probably DD-WRT, which allows pretty smooth RF output variation right up to the 251 "setting". The ND driver, when used in Tomato, behaves differently for some reason. It is believed that DD-WRT has access to the source code for the wireless drivers, and may have done something we don't know about and can't reproduce.

    Incidentally, DD-WRT v24sp1 appears to have approximately 3dB higher output for the same setting. The old non-ND driver used by original Tomato likewise. It is therefore probable that the settings on Tomato may not even give close to 251mW output. I think the ND drivers achieve only half power (not proven though).
     
  80. wasp87

    wasp87 Network Guru Member

    Does the newest non-ND tomato still have a lower output?
     
  81. Toastman

    Toastman Super Moderator Staff Member Member

    Again, it's a moot point. It depends on how you measure it, at what speed, mode, modulation class, country regulatory mode, and which bits of the 802.11 specifications are implemented in wireless cards and routers, whether they work properly together, or even at all, how the user has set them, and what other clients using what mode are sharing the channel or are associated. That's why I said, everyone on the forums (and I'm talking of at least 20 different forums I've looked at in many countries) keeps coming up with different answers. But anyway, the easy answer is "Yes", though this may vary with the prevailing wind.

    The easy way is to answer the question yourself, for your own particular set of circumstances, by varying the power setting on your router while monitoring the strength on your wireless card - use inSSIDer as an addon if necessary. As a quick guide, if you do not see a very noticeable 2-3dB increase when changing from 84 to 251, then the output is being limited by the driver.

    I'll post some more charts here shortly, watch this space....
     
  82. nate123

    nate123 Addicted to LI Member

    Transmit Power vs. RSSI

    When increasing my router's (WRT54GL, 1.23) transmit power from 55 to 150, the average RSSI of my devices went from -90dBm to -50dBm... is this a good change?

    What is the max Wi-Fi throughput speed of the WRT54GL? mine is set to G-only, and I see ~21mbps down (2.5MB/s). How can I increase this? It is already overclocked to 245MHz (at 250 it got very sluggish)
     
  83. wasp87

    wasp87 Network Guru Member

    Yes -50dBm is much improved.

    You are near the max for your throughput speed, as you're already overclocking, not much else can be done. Test with different overclocks (lower too) to see if it improves. You can also try Victek's tweaked Tomato firmware, 1.25.2. http://victek.is-a-geek.com/tomato.html. Will probably want to use ND version if your router is compatible.

    Toastman - Thanks for the inSSider recommendation, I have a 2 story house, and at the furthest point from the router in the house, I get a consistent -50dBm in inSSider, which I believe I read is the max it will read, so that's good :D. Never drops even slightly from -50dbm. Seems like a far superior program to NetStumbler now.
     
  84. nate123

    nate123 Addicted to LI Member

    I am running Victek's RAF 1.23.8515.5 ND... so I guess that is the limit.
     
  85. Toastman

    Toastman Super Moderator Staff Member Member

    nate123, the improvement you quote is totally impossible; there is something wrong with your measurements. You cannot get a 40dB change in signal strength from such a tiny increase in transmit power. The increase you should see on a typical RSSI indicator would be around +4dB.

    So I think you maybe made a typo with the -99 dBm? :biggrin: That wouldn't have worked to begin with.
     
  86. Toastman

    Toastman Super Moderator Staff Member Member

    Here are some recent measurements on three different setups, tested in Mode G at 54Mbps all under the same conditions. Measurements by a TP-Link TL-WN422G USB adapter using InSSIDer. Please note that I make measurements only at levels under -60dB or so, where the accuracy is not compromised by receiver overload or the nonlinear RSSI firmware.

    1) DD-WRT v24sp1 with ND driver 4.150.10.5
    2) Tomato 1.25 with standard driver 4.130.19.0
    3) Tomato 1.25 RAF with ND driver 4.158.4.0

    [IMG: signal strength comparison of the three driver/firmware setups above]

    wasp87, inSSIDer does have a wee problem with recent versions, which is why it limits at -50dBm. The authors are working on it, and it may be fixed soon, but the problem is inherent in the RSSI system, and it will probably stay the same. ( see http://www.linksysinfo.org/forums/showpost.php?p=350206&postcount=72 ). If you can find an older version, or are able to set the use of NDIS data in the setup options of recent versions, it would be better.

    But anyway, your signal is very solid, you clearly won't benefit from more power. You may even be overloading one or both receivers!
     
  87. nate123

    nate123 Addicted to LI Member

    You are correct. For me, increasing the power has not yielded a real-world result like higher speeds. My router is at its limit of ~22Mbps for wireless transfer.
     
  88. Toastman

    Toastman Super Moderator Staff Member Member

    I guess it was at around -50dBm to begin with then, a very strong signal, so it shouldn't help. I would not expect to see any really significant changes once over the threshold (on most cards) of around -70dBm.

    Thanks!
     
  89. wasp87

    wasp87 Network Guru Member

    Played around with version 1.1 of inssider that can detect better than -50dBm


    Using a WMP54G, the best results were yielded with 150mW. Strangely I noticed that I would go from 1 setting to another, and back to the same setting as before, and the signal would be significantly different than before. Doesn't seem 100% consistent. I ended up with the best number being a stable -37dBm @ 150mW at the furthest point in my 2 story house. I will do some more testing later to see what's up.

    EDIT: Seems like somewhere in between 150 and 200 is the best for my setup. 168mW improves from 150mW, but 200mW decreases from 150mW.

    EDIT2: At 168mW I noticed DD-WRT set the Rate to 34Mbps or so, then it went down to 24Mbps I believe. At 150mW, it's a constant 54Mbps. My guess is that DD-WRT determined it can't maintain a stable 54Mbps@168mW so it went down. Even though the rate is slow, the signal strength is still higher (makes sense) Anyone have a comment on this?
     
  90. Toastman

    Toastman Super Moderator Staff Member Member

    Keep watching and testing for a few weeks. You'll notice it change all the time. A change of 18mW is only a fraction of a dB and shouldn't really make any difference to anything. You'll usually see at least 3dB of jitter on the link anyways. Monitor over the long term (days) and see if it is repeatable. And for each reading, take the average of 10. After a while you will begin to see which readings are correct and which for some reason are way off.
     
  91. wasp87

    wasp87 Network Guru Member

    EDIT: Nvm, this whole post is pointless. I figured it out.

    EDIT: Yea, I think most of the info I gave before is incorrect. Those dBm ratings are off. I'll do some more testing and post my correct results.
     
  92. Toastman

    Toastman Super Moderator Staff Member Member

    Why RSSI isn't very useful for signal measurements

    The IEEE 802.11 standard defines a mechanism by which RF energy is to be measured by the circuitry on a wireless NIC. This numeric value is an integer with an allowable range of 0-255 (a 1-byte value) called the Receive Signal Strength Indicator (RSSI). Notice that nothing has been said here about measurement of RF energy in dBm or mW. RSSI is an arbitrary integer value, defined in the 802.11 standard and intended for use, internally, by the microcode on the adapter and by the device driver.

    However, no vendor has actually chosen to measure 256 different signal levels, and so each vendor's 802.11 NIC will have its own specific maximum RSSI value ("RSSI_Max"). For example, Cisco chooses to measure 101 separate values for RF energy, and their RSSI_Max is 100. Symbol uses an RSSI_Max value of only 31. The Atheros chipset uses an RSSI_Max value of 60. Therefore, it can be seen that the RF energy level reported by a particular vendor's NIC will range between 0 and RSSI_Max.

    Whatever range of actual energy is being measured, it must be divided into the number of integer steps provided by the RSSI range. Therefore, if RSSI changes by 1, it means that the power level changed by some proportion in the measured power range. There are, therefore, two important considerations in understanding RSSI. First, it is necessary to consider what range of energy (the mW or dBm range) that’s actually being measured. Secondly, it must be recognized that all possible energy levels (mW or dBm values) cannot be represented by the integer set of RSSI values.

    Herein lies the reason that making measurements with simple tools like inSSIDer is so difficult. Every wireless card manufacturer, and there are many, can choose its own RSSI_Max level, and this may vary depending on the version of their particular chipset. And so all of us are making comparisons between varying situations with tools that are also varying in effectiveness (I hesitate to use the term "accuracy" here - there is no implied accuracy whatsoever in the RSSI system!).

    Right away we can also see that the lower the RSSI_Max value used by a wireless card's manufacturer, the larger the "jitter" - the jump of several dB in the indicated signal level as the signal increases slightly and crosses the threshold into the next "step". You've all seen the signal level jumping by three or four dB between readings, I am sure.

    In the case of Cisco, the jitter would be small; with Symbol, it would be perhaps three times larger. Just this one artifact alone is enough to nullify the majority of the "tests" conducted by forum members. A very slight change in the path loss on a channel can push a measurement up or down by 3-4 dBm. Depending on what wireless chip (and hence RSSI_Max) is used, the user may see this as anything between a 3dB and 12dB change! This is obviously absolutely useless as a measurement tool.

    Worse, wireless NIC manufacturers do not measure signal strength so accurately at the upper levels. The main use for the signal level measurement is to determine when signals are so weak as to warrant looking for another AP (roaming) - typically at a 20% reading. The logarithmic nature of the dBm measurement, coupled with the fact that the RSSI range used for measurement contains dBm “gaps” (due to the integer nature of the RSSI value), has led many vendors to map RSSI to dBm using a lookup table. These mapping tables allow for adjustments to accommodate the logarithmic nature of the curve. There may be some quite large steps using this method, especially when the signal is strong, making jitter between steps much worse on some points of the curve.

    So always conduct your measurements on weak-ish signals which may be more likely to give better accuracy - let's say - a distance of 25m is usually a good idea!

    For example, when the signal level is 50%, this is reported with different values of RSSI, depending on the vendor: a Symbol card would convert it to an RSSI of 16, because its RSSI_Max = 31; Atheros, with RSSI_Max = 60, would convert it to an RSSI of 30; and for Cisco, which is the easiest one because its RSSI_Max = 100, RSSI is 50. InSSIDer would then take these figures and convert to a dBm scale - and by now, we're getting lost. Because InSSIDer does not know or care what wireless card you are using!
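
    To make that concrete (a sketch using the vendor RSSI_Max figures quoted above; the linear mapping and the 100dB span are simplifying assumptions of mine - real cards use lookup tables):

        RSSI_MAX = {"Cisco": 100, "Atheros": 60, "Symbol": 31}

        def percent_to_rssi(percent, rssi_max):
            """Map a signal 'percentage' onto a vendor's integer RSSI scale."""
            return round(percent / 100 * rssi_max)

        def db_per_count(rssi_max, assumed_span_db=100):
            """Roughly how many dB one RSSI count represents, assuming a linear
            mapping over a hypothetical 100 dB measured span."""
            return assumed_span_db / rssi_max

        for vendor, rmax in RSSI_MAX.items():
            print(vendor, percent_to_rssi(50, rmax), round(db_per_count(rmax), 1), "dB per count")
        # The same 50% signal reports as 50 (Cisco), 30 (Atheros) or 16 (Symbol),
        # and one RSSI count is worth ~1 dB on Cisco but ~3 dB on Symbol.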

    You see the problem with trying to make meaningful conclusions with what amounts to a total pile of crap as a basis?

    Unfortunately, we still do not know how all vendors map RSSI to signal strength percentage. This lack of consistency between vendors (and even between their different models) does not allow for direct comparison of performance evaluation results, especially when performed with equipment of different vendors.

    Also, you can see some wireless NIC cards are therefore more suitable than others for RF "measurements" using simple software like inSSIDer, those with a larger "RSSI_Max" level would be best.

    It would be interesting to know the RSSI_Max of the WRT54GL.

    Using, say, inSSIDer with wireless cards by different manufacturers, we see huge differences in indicated dBm levels, as explained above. It is highly doubtful that any commonly used utility like inSSIDer will ever make any attempt to identify wireless cards and adjust the data to provide anything close to an accurate dBm figure. To do this, it would need to know the RSSI_Max value used by the card; however, this information is not usually made available by the manufacturers (even the data for the ones mentioned above has actually changed in newer designs, so it is safe to assume we do not know the RSSI_Max value - ever).

    The larger the card's RSSI_Max, the more compressed the dBm scale will be on the graph. The expected change of RSSI from changing a WRT from, say, 1mW to 150mW would be around 20dB - if the change were accurate and not just a relative figure. You can see that in practice, 10-12dB is all that is ever shown. A measurement accuracy of around 50% is pretty useless. But then, we don't actually know if the 1 and 150mW readings on the router are accurate, or even refer to any real power measurement either, do we? Recent wireless drivers, when set to use the driver's defaults, even on old routers such as the WRT54GL, actually show a transmit power of around 1.5 WATTS!! That is ridiculous. They can't run 1.5 watts! They are limited to 250mW by the hardware!

    Utilities are sometimes provided with the purchase of a wireless card, written by the card manufacturer. They would presumably be in a position to provide better accuracy, as all pertinent information is known to them (Intel PROSet software, for example). Whether it actually IS more accurate - if they were even interested in making it so - I have no idea. Remember - this information is completely meaningless to the majority of users, and it was never intended that users would ever see the readings!
     
  93. Dent

    Dent Network Guru Member

    Toastman,

    Due to your previous comments on the tx power for the WRT54GL, I set mine to 150 mW and all is good. Now I have acquired a Buffalo WHR-HP-G54, which has a built-in signal amplifier. When Tomato is installed on it, the tx power defaults to 10 mW as opposed to the usual 42 mW default for the WRT54GL. Do you have any idea what would be a good overall boosted tx power to set it to? Do you think setting my WHR-HP-G54 to 150 mW would have any adverse effects on the router due to the signal amplifier? In the DD-WRT forums there is a recommendation to set the WHR-HP-G54 to no higher than 30 mW, but for fear of what I do not know, and the why is not really mentioned.
     
  94. Toastman

    Toastman Super Moderator Staff Member Member

    I have never even seen a Buffalo (except for the one with 4 legs) so I have no experience with it.

    But - to answer your comment - speaking as an engineer: if the Buffalo HP has a similar wireless to the WRT series, with a 251mW maximum output, and it is then used to drive an ADDITIONAL power amplifier, then it is probable that the drive level to that extra amplifier would be too high and would have to be reduced. If it were not reduced, the amplifier would be driven into nonlinearity (and thus cause severe interference to others) or even destroyed.

    The indicated power level would not be the true output of the HP when the power amplifier is active.

    10 is what Buffalo used, some DD-WRT recommendations say 30 can be used. Who would you trust?

    Perhaps someone with knowledge of the Buffalo will confirm this. Be careful with it until you get more information!
     
  95. Gaius

    Gaius Networkin' Nut Member

    Sorry to bump an old thread but I wanted to confirm that Toastman is right about the ND driver power recommendations. The best performance is at a setting of 84 on Tomatousb.org's ND drivers and any more was pointless for me.
     
  96. Toastman

    Toastman Super Moderator Staff Member Member

    I had several emails recently asking about signal strength, and increased transmit power. I can't reply personally to these emails, so I wanted to bring this old thread up again, everything in it is still relevant.

    Experimenting with the different countries on some newer routers, including ARM, still shows Singapore settings to allow the maximum transmit power. On the routers I have tested, USA was always several dB weaker. Albania also seems good.
     
  97. pharma

    pharma Network Guru Member

    I just started using Norway settings (I live in US) with some good results. I agree experimentation with different country settings is the best method to determine what is best. Using a RT-N66U with transmit power set to 0.
     
  98. rickmav3

    rickmav3 Serious Server Member

    Tried to use this as guide:
    http://linksysinfo.org/index.php?threads/r7000-super-weak-wireless.70701/#post-254527

    Still, for the R7000 I got a very weak 2.4GHz signal compared with an N66U, with the same Country setting on both and Transmit Power at 0 (max. hardware default). Clients in the same room had only 30-40% signal quality and very low Internet speed.
     
  99. hw1380

    hw1380 Connected Client Member

  100. Monk E. Boy

    Monk E. Boy Network Guru Member

    Legally it certainly does depend on the channel, because the power allowed on the lowest channels is much, much lower than on the highest channels; to avoid destroying 5GHz the same way 2.4GHz was destroyed, they cut the power on the lower bands. Whether or not the hardware (or, more accurately, the software running the hardware) listens to the legal framework - well, that's the question.
     
    Last edited: Aug 20, 2015
