iPhone Pancake

I was staying at a friend's house in Austin a few weeks ago. We pulled out of the driveway to head to the airport for the flight home. Just before we drove off I realized I didn't have my iPhone. "Hang on, I forgot my phone." Then I looked up and saw it lying in the driveway. Yikes! Apparently, as we were loading up the car, my iPhone fell under the car (a Ford Edge SUV, actually). When we backed out, we drove right over the phone. It was screen up with a rubber case (an iPhone condom from inCase - http://www.goincase.com/products/detail/protective-cover-cl59051/2).

I retrieved the phone. The screen was not cracked, but the phone was inert... dead... "Shit!" I pushed every button. Nothing. I followed the reset process. Nothing. As we drove to the airport, I was trying to wrap my head around the idea of flying back home and sitting around airports all day without the iPhone. I began rationalizing spending $399 for another iPhone. (I doubt Apple will replace a flattened iPhone.)

For some reason we both decided at the same time that we should try plugging the phone into power. I connected the iPhone, not expecting much. After a few seconds the iPhone started booting up! The phone started working again. No problems. The phone was on and fully charged before we ran over it. I am not sure why it shut itself off... maybe there is a panic function that shuts off the phone when it is subjected to extreme stress like this. Before the pancake incident, the main button was a little sticky. Now it works flawlessly. The phone should last for another few months, until the new iPhone 3.0 is available.

Phil's Predictions for the new iPhone

With Apple's Worldwide Developers Conference this week, there are many predictions being thrown around. I have no special inside knowledge, but I can't resist throwing my guesses into the mix. Yes - Apple will announce a new iPhone at WWDC this week. It will have improved technical specifications... a camera with better resolution, video capability, 802.11n Wi-Fi, a faster processor, more memory, etc. But the new iPhone won't be marketed that way... Apple doesn't allow itself to be put on the "mine is bigger than yours" technology treadmill. The new iPhone will be marketed as something like: "...and now, the world's first video phone in your pocket..." Put the camera on the same side as the main screen, add video capability, add iChat... We already have Skype for the iPhone. Voila - pocket video phone. The only challenge is the impact on AT&T's network in the US... Perhaps it will start out as Wi-Fi only, like Skype for the iPhone... Whatever Apple does, the new iPhone will be sufficiently cool that I will have to get one soon after it comes out. Oh... one more thing. Steve makes an appearance during the keynote. He may be the "one more thing" of this keynote.

Our testing of WLAN systems at scale dramatically demonstrated how interference limits WLAN capacity and performance. That opens the question of what other means are available to control WLAN interference and hence increase performance.


We tested enterprise wireless LAN systems based on Cisco infrastructure deployed in two ways:


  • in a conventional microcellular architecture, and 
  • using the InnerWireless Horizon Distributed Antenna System (DAS). 

In both cases, the same Cisco access points and wireless LAN controller were used. The main difference is in the physical layer - how the wireless signals are distributed throughout the coverage area.


We discovered that controlling interference through carefully crafting antenna coverage doubled the WLAN capacity using the same APs.


Horizon is a Distributed Antenna System (DAS) that can accommodate a broad range of wireless services operating from 400 MHz to 6 GHz, including paging, public safety, 2-way radio, cellular/PCS, WMTS and Wi-Fi. Horizon implements Wi-Fi (802.11a, b, g, or n) wireless LANs as layers of independent wireless service. This layered architecture is known as a layered DAS, or L-DAS. The Horizon L-DAS provides pervasive coverage, and layering increases capacity in the same area by using multiple Wi-Fi access points (APs) operating on different channels. 


The layering capability is unique compared to the conventional microcellular architecture for wireless LANs, where a single access point (AP) provides both coverage and capacity in an area. In the L-DAS approach the RF coverage is engineered ahead of time. The access points are located in a wiring closet (or another convenient location with other telecom gear) on each floor, and multiple antennas are placed throughout the coverage area so that the wireless signal level for each channel used is consistent throughout the entire area.  


In the classic microcellular approach, the access points are placed out in the coverage area and each one has its own antennas. The channel and signal level of each AP are set independently by the Wireless LAN controller and can change dynamically. Usually, the resulting coverage pattern is that adjacent access points operate on different channels. In an L-DAS, all channels are available in all locations. We compared the difference in performance for a system deployed in the conventional manner and a system deployed using InnerWireless Horizon Distributed Antenna System (DAS).


Horizon is made up of multi-coverage diversity antennas that are engineered to provide uniform signal-level RF coverage throughout a facility. Each coverage area, called a segment, uses a single coax cable with multiple RF radiating elements (antennas) to cover a typical area of 6,000 square feet, about three times the coverage expected from a discrete access point in a microcellular deployment. 


An access point is connected to the antenna segment using an access point combiner. Access points for each segment are collocated in a common location (e.g. a telecommunication closet or recessed ceiling cabinet) so that they are secure and easy to maintain. Multiple segments can be combined to provide coverage throughout an entire facility. In this architecture, a single channel of Wi-Fi covers the entire area. If more capacity is needed, more APs can be added to the antenna segment and tuned to a different channel. Each new channel provides an independent layer of wireless LAN service. At any specific location, all channels that are in use are available. 


In the 2.4 GHz band there are 3 non-overlapping channels, and the DAS distributes signals for each channel throughout the entire coverage area. More channels are available in the 5 GHz band, depending on which version of 802.11 is being used. Horizon supports from one to six layers of wireless LAN coverage on a single DAS segment - three channels in the 2.4 GHz band and up to three channels in the 5 GHz band.   

A fully loaded L-DAS with three layers operating in the 2.4 GHz band would have three access points per 6,000 square foot segment, one for each channel. A microcellular deployment supporting the same applications and designed for the same capacity will usually require the same number of APs. 


Our testing configuration used three DAS segments with three APs each, for a total of nine APs. The microcellular system we tested used nine APs to deliver coverage and capacity to the same area. 
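As a sanity check on the AP counts, here is a minimal Python sketch of the layering arithmetic. It uses the 6,000 square foot segment size quoted above, assumes roughly 2,000 square feet per discrete microcell (one third of a segment, per the "about three times" statement), and rounds the floor area to 18,000 square feet so it fills three segments - close to our roughly 20,000 square foot test space.

# Rough AP-count comparison for an L-DAS vs. a microcellular deployment.
# Assumptions: 6,000 sq ft per DAS segment, ~2,000 sq ft per discrete
# microcell, and three 2.4 GHz channels/layers (channels 1, 6, 11).

FLOOR_AREA_SQFT = 18_000   # rounded floor area, close to our ~20,000 sq ft test space
SEGMENT_SQFT = 6_000       # coverage of one DAS antenna segment
MICROCELL_SQFT = 2_000     # assumed coverage of one discrete AP
LAYERS = 3                 # one AP per channel per segment

segments = FLOOR_AREA_SQFT // SEGMENT_SQFT
ldas_aps = segments * LAYERS
microcell_aps = FLOOR_AREA_SQFT // MICROCELL_SQFT

print(f"L-DAS: {segments} segments x {LAYERS} layers = {ldas_aps} APs")
print(f"Microcellular: {microcell_aps} APs for the same area")

Under those assumptions both architectures land on nine APs, which matches the test configuration described above.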


Key Findings


  • The L-DAS exhibited lower interference between APs in the same system than the classic microcellular system.   
  • The L-DAS delivered more than double the data capacity of the discrete microcellular system in our tests.   
  • The clients on the L-DAS exhibited more uniform and predictable performance.  
  • The DAS system did not compromise the basic functionality of the Cisco Wireless LAN system. All of the expected features worked well on the DAS system. 
  • Roaming, voice support, QoS mechanisms, 802.11a/b/g and 802.11n with 20 or 40 MHz channels worked on the L-DAS with the same clients and software as the discrete microcellular system. 


The Limits of WLAN Capacity


We had a unique opportunity to test wireless LANs in a vacant, but fully built out office building. Most comparisons of  wireless LAN systems have been done with one access point and 10 to 20 clients in an RF chamber. While that is an  excellent controlled environment for repeatable tests with wireless clients, it doesn't reveal the subtlety of the  complete systems or demonstrate their ability to handle large scale deployments. We wanted to examine the behavior of wireless LAN systems in a more realistic environment - in this case in a 20,000 square foot office space with 10-15 APs and 72 data and 48 voice clients.


We tested wireless LAN systems from the leading enterprise vendors Cisco, Aruba Networks and Meru Networks. All of them are enterprise class wireless LAN systems with integrated security and management tools that are designed to handle very large deployments. All of the systems are 802.11 a/b/g and are Wi-Fi certified. All of them employ a wireless LAN controller that addresses the complexity of managing, securing and deploying these systems. 


There are two different system architectures represented. The Aruba and Cisco systems are examples of the micro-cellular architecture, which has been the primary approach for deploying large scale enterprise wireless LANs. The Meru system uses a novel architecture based on single-channel spans and explicit AP coordination of packet transmission. Aruba and Cisco follow the classic micro-cellular approach to scaling - add more APs and decrease the transmission power of all the APs to minimize interference and maximize capacity. Meru takes a very different approach - recognizing that APs interfere at far greater distances than they can communicate, it chooses to explicitly control interference.


The complete study examines what happens when we push these systems to their limits. We explore how much data capacity these systems deliver, how many voice calls are possible, and how the systems react with a mix of voice and data. The results are surprising, and illustrate some challenges for the 802.11 MAC protocol and highlight the differences between these two architectural approaches to large wireless LANs. 


The story that emerges from this enterprise wireless LAN scale testing is broader than the difference between products from 802.11 infrastructure vendors. It is really about the 802.11 protocol and how it responds when pushed to its limits in an enterprise environment. It also gives us a preview of what well-crafted 802.11n versions of both these types of systems might deliver.


I would like to mention a few key results here.


Co-channel interference is a real factor in enterprise wireless LAN deployments whether they are hand tuned or configured with the vendors' automated RF tools. The interference range of Wi-Fi devices is greater than their useful communication range. There are not enough independent channels in the 2.4 GHz band to allow for deployments with continuous coverage that also have APs on the same channel spaced far enough apart to avoid self interference. This co-channel interference causes packet errors and retransmissions, limiting the overall performance of 802.11 systems under load. 
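To illustrate why three channels are not enough, here is a small hypothetical sketch in Python. Every distance in it is an assumption chosen for illustration, not a measurement from our tests; the point is only that with 3-channel reuse, same-channel APs end up well inside each other's interference range.

# Hypothetical illustration of 2.4 GHz channel reuse. All distances
# below are assumptions, not measurements.

import math

AP_SPACING_FT = 60      # assumed AP spacing needed for continuous coverage
COMM_RANGE_FT = 60      # assumed useful communication range of one AP
INTERFERENCE_MULT = 3   # assumed: interference reaches ~3x the useful range

# With a 3-channel reuse pattern on a triangular grid, the nearest
# same-channel neighbor is roughly sqrt(3) grid spacings away.
same_channel_spacing = math.sqrt(3) * AP_SPACING_FT
interference_range = INTERFERENCE_MULT * COMM_RANGE_FT

print(f"nearest same-channel AP: ~{same_channel_spacing:.0f} ft")
print(f"interference range:      ~{interference_range:.0f} ft")
print(f"same-channel APs inside interference range: {same_channel_spacing < interference_range}")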


Cisco and Aruba are classic micro-cellular architecture systems. In a data-only test with 15 APs and 72 wireless clients, they delivered less than 50 Mbps of system throughput. However, in a 10 AP configuration, both Cisco and Aruba delivered more throughput than with 15 APs. Aruba's throughput increased from 46 Mbps to 64 Mbps - almost 40%. If the APs in the system were perfectly isolated from each other, we would have expected the 15 AP configuration to deliver 50% more throughput than the 10 AP configuration with the constant load in this test. Instead, more APs attempting more simultaneous transmissions caused more interference and lowered performance for these systems. 


The Meru system has a very different architecture and deployment strategy that is designed to deal with these enterprise deployment issues. The recommended Meru deployment starts out with all APs on the same channel. In our test environment, we were able to cover the entire floor with 5 Meru APs operating at high power. On the surface, the Meru approach seems like it would be low capacity since it essentially groups APs together on the same channel and the same collision domain. However this deployment approach allows the Meru WLAN controller to coordinate the airtime access of the APs and (indirectly) wireless clients in the system. To increase capacity in the Meru system, a new set of APs is added in the same area, all tuned to a different channel and still operating at high power. The 15 AP Meru configuration we tested is three independent channel spans with 5 APs each on channels 1, 6 and 11. This approach runs contrary to the micro-cellular deployment exemplified by Cisco and Aruba, which adds more APs at lower power distributed throughout the coverage area in order to increase capacity. 



System 10 AP Throughput (Mbps) 15 AP Throughput (Mbps)
Aruba 64 47
Cisco 53 49
Meru 61 100


With 10 APs, the Meru, Aruba and Cisco systems delivered approximately the same aggregate capacity. With the increase to 15 APs (and the power and channel assignments adjusted for Cisco and Aruba), the Cisco and Aruba capacities substantially decreased, while the Meru system increased its capacity to deliver twice the system throughput of the micro-cellular systems in the 15 AP configuration and almost twice the capacity of its own 10 AP configuration. 
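To make the scaling comparison concrete, here is a minimal Python sketch of how much of that ideal 50% gain each system actually captured going from 10 to 15 APs, using the throughput numbers from the table above.

# Scaling efficiency from 10 APs to 15 APs, using the measured throughput
# from the table above. With perfectly isolated APs and constant offered
# load, 15 APs would ideally deliver 1.5x the 10 AP throughput.

measured = {            # (10 AP Mbps, 15 AP Mbps)
    "Aruba": (64, 47),
    "Cisco": (53, 49),
    "Meru":  (61, 100),
}

for vendor, (t10, t15) in measured.items():
    ideal_t15 = 1.5 * t10
    efficiency = t15 / ideal_t15
    print(f"{vendor}: 15 AP ideal {ideal_t15:.0f} Mbps, "
          f"measured {t15} Mbps ({efficiency:.0%} of ideal)")

Under that simple model, the micro-cellular systems captured roughly half to two thirds of the ideal gain, while the Meru configuration slightly exceeded it.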


This dramatic difference was surprising. Co-channel interference is worse than we expected at this scale, and AP coordination is a significant benefit for enterprise WLAN systems. The micro-cellular systems had a very high link-level packet retry rate during the testing. (We saw retry rates greater than 40% during some of the tests.) The Meru system with AP coordination had a much lower retry rate, and the result is better system throughput. 
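One simple way to see the cost of those retries: under a simplified model in which each transmission attempt fails independently with probability p, a delivered frame costs about 1/(1-p) attempts of airtime. A minimal Python sketch follows; it ignores backoff time and rate fallback (both of which make the real penalty worse) and treats the reported retry rate as a per-attempt failure probability, which is an assumption.

# Simplified airtime-cost model for link-level retries: if a fraction p
# of transmission attempts fail and must be retried, each delivered
# frame costs roughly 1/(1-p) attempts of airtime.

for p in (0.10, 0.20, 0.40):
    attempts_per_frame = 1 / (1 - p)
    print(f"retry rate {p:.0%}: ~{attempts_per_frame:.2f} attempts per delivered frame")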


The micro-cellular systems did not scale well. We thought 15 APs would be reasonable for high capacity testing in our environment, but Cisco and Aruba did not perform well with that many APs in this space. They both delivered higher system throughput (and better voice performance) with 10 APs.   


Clearly, adding more APs did not increase the capacity of the micro-cellular systems, and there is a limit to the number of APs (and the system capacity) that can be effectively deployed in these systems. The Meru system delivered better performance than the best micro-cellular system in the 15 AP tests. Coordinating access with neighboring APs is a promising area that should be pursued by 802.11 for both increased performance within the same system and better co-existence with other systems in the unlicensed bands. 


Broadband Stimulus

There is a portion of the American Recovery and Reinvestment Act that will fund programs to accelerate the deployment and use of broadband in the United States. In particular, the NTIA's Broadband Technology Opportunities Program (BTOP) and the Department of Agriculture's Rural Utilities Service (RUS) grants and loans will go to fund programs "in unserved, underserved, and rural areas and to strategic institutions that are likely to create jobs or provide significant public benefit". These programs are handing out billions of dollars in the next 18 months and will likely have thousands of applicants for the much needed stimulus money.

How will these agencies do it? It is a daunting task, and efforts are already underway. The first open public meeting was held earlier this week and two more will be held in the coming weeks. At this point there are more questions than answers. How are these agencies going to work together? Who is eligible to receive a grant? How will the success of the program be measured overall? What does "underserved" mean? For that matter - what does "broadband" mean?

In the new era of transparency and accountability, grant applicants will have to explain how they are going to execute their program, deliver the claimed benefits, and measure the results. For the wireless broadband piece, this brings us right back to the basics. What are the technical requirements to support a given application - coverage, throughput, latency and overall system capacity? In order to support multiple different applications, what are the system level requirements? What performance is required to deliver an acceptable user experience? Perhaps it is time to dust off our Novarum lessons from the first round of municipal wireless. We cannot afford to repeat those mistakes.
IEEE 802.11n is the new international standard for wireless local area networks, incorporating new smart antenna technologies (MIMO - Multiple Input, Multiple Output) permitting a 5x performance and 2x coverage improvement for WLANs. While this new technology is now becoming the de facto standard in consumer and enterprise networks, it has not yet made an appearance in outdoor, metropolitan scale networks derived from WiFi technology. Many of these same MIMO techniques are being incorporated in both WiMax and in LTE for cellular. Sadly, neither is being produced in much volume as yet, and fixed WiMax networks do not incorporate MIMO technology. There has been much dispute about whether the specifics of 802.11n, designed for indoor networks, would apply to outdoor networks and bring the economy of scale of 802.11 to outdoor applications. We (Novarum) decided to test the effects of 802.11n on outdoor performance. We found dramatic improvements in using indoor 802.11n technology outdoors - so much so that 802.11n has become the recommended baseline for new network deployments.

First, let's review the key pieces of technology incorporated in 802.11n and how each might affect outdoor performance.
  • Maximal Ratio Combining: the receiver combines the signal from multiple paths to maximize SNR. We can see a 3-4 dB receiver link budget improvement even for legacy clients, dramatically improved received signal strengths, dramatically reduced packet error rates, and more reliable use of higher-level encoding methods, increasing link performance.
  • Transmit Beam-forming: modulate the phase and amplitude from multiple antennas to create a phased antenna array that points increased performance at the destination node. 7-8 dB of gain is possible with omnidirectional antennas, decreasing interference, increasing capacity and decreasing deployment cost - in effect, directional antenna performance from omnidirectional antennas, substantially increasing network performance and decreasing deployment cost. (A rough range calculation for these gains follows this list.)
  • Spatial Multiplexing: use the redundant paths created by multipath to increase throughput by transmitting data in parallel paths. Probably not compelling outdoors, since high SNR is needed for parallel data paths, but it can increase reliability by taking advantage of multipath around deep fades.
  • Channel Bonding: 20 and 40 MHz channels in both the 2.4 and 5 GHz bands. 20 MHz channels are legacy compatible, while 40 MHz channels double throughput and are mostly usable in the 5 GHz band.
  • Protocol Improvements: packet aggregation provides a modest overhead reduction and a performance improvement for streaming media and bulk transfers.
  • Cost Reduction: indoor WiFi demand is for low cost, dual band (2.4 and 5 GHz) simultaneous radios at commodity prices. The availability of these dual band 3x3 MIMO chipsets drives down the cost of multi-radio outdoor units.
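To get a feel for what those receive-combining (3-4 dB) and beam-forming (7-8 dB) gains could mean outdoors, here is a minimal back-of-the-envelope Python sketch. The log-distance path-loss model and the exponent of 3.0 are assumptions for a cluttered outdoor environment, not measurements from our testing.

# Rough range/coverage impact of extra link budget under a simple
# log-distance path-loss model: gain_dB = 10 * n * log10(range_multiplier).
# The path-loss exponent n = 3.0 is an assumption (free space is 2,
# dense urban clutter can exceed 3.5).

PATH_LOSS_EXPONENT = 3.0

for gain_db in (3, 4, 7, 8):
    range_mult = 10 ** (gain_db / (10 * PATH_LOSS_EXPONENT))
    area_mult = range_mult ** 2
    print(f"{gain_db} dB gain -> ~{range_mult:.2f}x range, ~{area_mult:.2f}x coverage area")

Under those assumptions, the 7-8 dB beam-forming gain alone roughly triples the coverage area of a node, which is consistent with the deployment-cost reductions suggested above.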
In the course of Novarum's Wireless Broadband Review in 2007 and 2008, we examined over 25 deployed WiFi networks (including all major vendors), 46 deployed 3G cellular networks and 4 fixed pre-WiMax networks. In the case of the WiFi networks, we noted the dramatic effect that client selection had on network performance, coverage and ultimately user satisfaction. We both examined 802.11n clients against the installed multivendor base of 802.11g infrastructure and constructed our own testbed from early outdoor 802.11n components to evaluate the effect of 802.11n when deployed in the infrastructure itself.

It is important to recognize that in almost all outdoor WiFi networks, the client access uplink is the weakest link in the communication chain. Legacy 802.11b/g clients experience VERY high packet retry rates of between 100 and 1000%, and there are often deep multipath fades of between 10-30 dB within a few tens of feet. The WiFi protocol is VERY good at masking these effects - instead they are most commonly seen indirectly, as lower throughput and higher delay variance. These effects are seen even in deployments with very high access node density of 50 nodes per square mile or more. These deep fades and very high packet retry rates make mobility difficult, dramatically reduce throughput, make packet delay variance so high that streaming media becomes difficult, and materially decrease the overall capacity of these networks.

The improvements that 802.11n provides outdoors astonished us - particularly for a technology that has been disparaged as inappropriate for outdoor deployment. Deploying IEEE 802.11n technology has dramatic effects outdoors - both with legacy systems and, even more compellingly, in greenfield deployments. Let's summarize what we found in Novarum's experiments:
  • 100% throughput improvement of 802.11n WiFi clients with legacy 802.11g outdoor infrastructure;
  • 100% throughput improvement of legacy 802.11g WiFi clients with new 802.11n outdoor infrastructure;
  • 200% throughput improvement of 802.11n client with 802.11n outdoor infrastructure;
  • Similar coverage of 802.11n clients and infrastructure in the 5.4 GHz band as for legacy 802.11g clients and infrastructure in the 2.4 GHz band - making the 5.4 GHz band useful for client access;
  • 25% decrease in access latency and a dramatic improvement in latency variance;
  • a low power 802.11n client has the same throughput and coverage as a high power 802.11g client with 10x the transmit power and a better antenna; and
  • coverage to smartphones at low power and with poor antennas dramatically improves.
These results have a dramatic impact on outdoor wireless networks - bringing the benefits of MIMO technology at consumer price-points. We can expect that 802.11n technology will dramatically improve outdoor WiFi networks.
  • 400-800% increase in system capacity and throughput
    • 200-300% improvement in spectral efficiency through increased link budgets, reduced packet errors, increased modulation rates and improved fading performance
    • effective client access to the 200 MHz of the 5.4 GHz band
  • 802.11n clients dramatically improve legacy 802.11g networks and new 802.11n networks dramatically improve legacy 802.11g clients.
  • Streaming media applications will perform as we expect and will be much easier to deploy.
  • Better backbone designs by reducing the interference of the backbone mesh through beam-forming antennas rather than omnidirectional broadcast.
  • Decreased deployment cost due to decreased node cost. Possibly dramatically.
While not optimally designed for outdoors, 802.11n will SUBSTANTIALLY increase the performance and customer satisfaction of outdoor wireless networks. We can expect the first product announcements of outdoor WiFi networks incorporating 802.11n shortly, and we can expect all major vendors of outdoor WiFi equipment to be shipping by the end of 2009. Novarum recommends that all new outdoor WiFi networks use 802.11n products in their infrastructure and strongly recommends 802.11n clients wherever possible.
Two of the important issues in large scale wireless have been:
  1. Can a given technology provide a usable data communications service?
  2. How much does it cost to deploy such a service?
A useful network service provided at an affordable price is a necessary precondition for a successful network offering. Many of the early muni WiFi networks were hampered by the double whammy of both poor service AND a higher cost to deploy than expected. In seeking to answer these questions, Novarum structured its Wireless Broadband Review to provide some of this information.

During 2007 and early 2008, we tested cellular, WiFi and pre-WiMax networks in these cities: Anaheim CA (2x), Brookline MA, Chico CA, Cupertino CA, Daytona FL, Eugene OR, Galt CA, Longmont CO, Madison WI, Minneapolis MN, Mountain View CA (2x), Palo Alto CA, Philadelphia PA (2x), Portland OR (2x), Raleigh NC, Rochelle IL, St. Cloud FL (2x), Santa Clara CA, Sunnyvale CA, and Tempe AZ (2x). In several cities we tested twice to detect changes in traffic and improvements in network service over time and with experience. We discovered that, on average, all of these networks have similar performance and coverage, but that the best of the WiFi networks substantially outperformed the best of both the cellular AND the pre-WiMax networks. Our test was an apples to apples comparison of performance (delay, uplink throughput, downlink throughput) and availability (percentage of tested locations with service within the advertised service area) for all of the major network technologies:
  1. ATT (Cingular), Sprint and Verizon cellular data networks
  2. A number of metro WiFi networks using equipment by BelAir, SkyPilot, Strix, Tropos, and
  3. Four of ClearWire’s pre-WiMax networks.
We tested outdoor coverage in an average of 20 locations per city - testing all networks with the same traffic load and in the same location and time. One of the important determinants of good performance is a good client modem, and we tested with a variety of client modems. For today's thoughts, we'll look at standard USB external modems for each of the cellular data networks, a higher power WiFi modem (noting that current 802.11n modems appear to perform on par with these higher power clients), and a desktop directional CPE for the pre-WiMax ClearWire networks (no portable device was available at the time). We would expect the WiMax modem (AC powered, directional antenna) to have the advantage in performance. To our surprise, with similar client modems, averaged over good and bad networks, WiFi networks delivered almost 3x better performance than cellular networks and materially better performance than pre-WiMax networks - with similar levels of availability of service over the promised coverage area for all three network technologies.
Network Delay (msec) Uplink (kbps) Downlink (kbps) Availability
Average Cellular 340 195 507 89%
Average pre-WiMax 174 169 1124 83%
Average WiFi 113 767 1286 85%
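For reference, the "almost 3x" figure can be read straight off these averages; here is a quick Python sketch of the ratios, using only the numbers in the table above.

# Ratio of average WiFi performance to average cellular and pre-WiMax
# performance, from the table above (delay improves as it gets smaller).

averages = {   # (delay ms, uplink kbps, downlink kbps)
    "cellular":  (340, 195, 507),
    "pre-WiMax": (174, 169, 1124),
    "WiFi":      (113, 767, 1286),
}

wifi = averages["WiFi"]
for name in ("cellular", "pre-WiMax"):
    d, up, down = averages[name]
    print(f"WiFi vs {name}: {d / wifi[0]:.1f}x lower delay, "
          f"{wifi[1] / up:.1f}x uplink, {wifi[2] / down:.1f}x downlink")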
If we look at the best, and most recently deployed, WiFi network, we see performance and availability superior to the best of the cellular data networks (by a factor of 3!) AND the best of the pre-WiMax networks we measured - by at least a factor of 2.
Network Delay (msec) Uplink (kbps) Downlink (kbps) Availability
Best Cellular 192 612 980 100%
Best pre-WiMax 190 164 1129 100%
Best WiFi 63 2062 2949 100%
The measured performance demonstrates that WiFi networks materially outperform cellular data networks AND pre-WiMax networks - and do it with similar service area coverage. And likely lower deployment costs.

Metro WiFi Does Work

The great lesson we were supposed to have learned from the first generation of metro WiFi networks was that they don't work and no one wants to use them. Both lessons are false... though we have many examples of networks that have done either or both. Let's look at the WiFi networks we examined in the NWBR - which ranged from some of the early networks in the surge of metro WiFi exuberance to some of the later, more mature networks. For this example, I want to look at them using the results from high power 802.11g clients that use power levels and antennas better than most laptops (and all PDAs) - though still much less power than is used by cellular data or WiMax modems. Note also that our measurements indicate that the new generation of 802.11n WiFi laptop clients performs very similarly to these numbers.
Client Delay (msec) Uplink (kbps) Downlink (kbps) Availability
Worst 338 106 337 50%
Best 63 2062 2949 100%
Average 113 767 1286 85%
With similar client modems, averaged over good and bad networks, WiFi networks deliver almost 3x better performance than cellular networks and materially better performance than pre-WiMax networks - with similar levels of availability of service over the promised coverage area for all three network technologies. Municipal wireless in the unlicensed bands DOES work - at least as well as licensed wireless technologies such as cellular and WiMax.
Client Delay (msec) Uplink (kbps) Downlink (kbps) Availability
Average Cellular 340 195 507 89%
Average pre-WiMax 174 169 1124 83%
Average WiFi 113 767 1286 85%
If we look at the best, and most recently deployed networks - Minneapolis and Toronto - we see performance and availability superior to all the cellular data networks (by a factor of 3!) and pre-WiMax networks we measured - by at least a factor of 2.
Client Delay (msec) Uplink (kbps) Downlink (kbps) Availability
Best Cellular 192 612 980 100%
Best pre-WiMax 190 164 1129 100%
Best WiFi 63 2062 2949 100%
The measured performance suggests that WiFi networks materially outperform cellular data networks AND pre-WiMax networks - and do it with similar service area coverage. In addition, WiFi networks offer the added bonus of providing a lower grade of performance and coverage to the commodity WiFi clients packaged in laptops and smartphones.

Performance of Cellular Data Networks

We tested performance (delay, uplink throughput, downlink throughput) and availability (percentage of tested locations with service within the advertised service area) for ATT (Cingular), Sprint and Verizon cellular data networks in a number of North American cities during the NWBR. We tested one or more of these networks in these cities: Anaheim CA (2x), Brookline MA, Chico CA, Cupertino CA, Daytona FL, Eugene OR, Galt CA, Longmont CO, Madison WI, Minneapolis MN, Mountain View CA (2x), Palo Alto CA, Philadelphia PA (2x), Portland OR (2x), Raleigh NC, Rochelle IL, St. Cloud FL (2x), Santa Clara CA, Sunnyvale CA, and Tempe AZ (2x). In several cities we tested twice to detect changes in traffic and improvements in network service. Great disparity of service was noted, with several small towns (Galt CA and St. Cloud FL) having no 3G service at all (and hence barely averaging 100 kbps of data service) from any service provider, while larger, growing metro areas (Tempe AZ) had an abundance of high performance cellular data providers (with downlink service approaching 1000 kbps). When available, the three major cellular providers offered a similar grade of performance, averaging about 200 kbps on the uplink and about 500 kbps on the downlink. No measurement ever exceeded 1000 kbps.
Network Delay (msec) Uplink (kbps) Downlink (kbps) Total Availability 3G Availability
ATT (Cingular) 318 195 473 75% 59%
Sprint 330 215 559 96% 90%
Verizon 366 179 494 92% 70%
Average 340 195 507 89% 73%
On average, Sprint offered the highest performance with the greatest availability. ATT and Verizon both offered a slightly poorer grade of performance, but the availability for these two networks is far more interesting. Cellular networks do not offer a single grade of service... where available, 3G service is offered, but when there is no 3G capacity left, the networks fall back to offering 2G service instead. This fallback results in an almost 3x decrease in upload performance and over a 5x decrease in download performance. For Sprint, almost all of our testing locations offered 3G service, and in only 6% of those locations did the offered service fall back to 2G. For both ATT and Verizon, in about 25% of the locations with service we could not get 3G service but instead fell back to 2G. And in the case of ATT, this exacerbated the already poor availability: we could get service at all in only 75% of the tested locations! As we will see when we look at the results for WiFi networks, with the proper client modem selection, WiFi networks uniformly outperform cellular data networks and can achieve availability of 85% - not dissimilar to the average availability of 89% for cellular data.
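The fallback fractions above come straight from the two availability columns in the table; here is a small Python sketch of that arithmetic.

# Fraction of serviced locations that fell back from 3G to 2G, computed
# from the Total Availability and 3G Availability columns above.

availability = {   # (total %, 3G %)
    "ATT (Cingular)": (75, 59),
    "Sprint":         (96, 90),
    "Verizon":        (92, 70),
}

for carrier, (total, threeg) in availability.items():
    fallback = (total - threeg) / total
    print(f"{carrier}: {fallback:.0%} of serviced locations fell back to 2G")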

Rural Internetification Project

One of the smaller towns we tested in the NWBR was Rochelle, IL - a farming town of 10,000 people about 75 miles west of Chicago. In this case the ISP providing wireless Internet service was the city itself, through the Rochelle Municipal Utilities organization - along with power and water. It was an interesting experience going to the small office, right next door to the window where a citizen would pay their electric bill, to shop and sign up for Internet service... with a variety of recommended CPE devices displayed on the wall.

One of the key items we argue about in building municipal networks is ownership - who should own and operate the network? And here in Rochelle was a rather good wireless network that was built by the town, and clearly in a place where other options for broadband access offer poor performance or do not exist. (The city started by offering dial-up Internet access, and the wireless cellular carriers offer spectacularly poor wireless data products - the city's network offers roughly 20x the performance of the sadly 2G wireless cellular networks in the town.)

This reminded me of the Rural Electrification Project (which had originally sponsored bringing electricity to Rochelle after WWII). From Wikipedia: In 1936 the Rural Electrification Act was enacted. (The Tennessee Valley Authority was also an agency involved in rural electrification.) The Rural Electrification Administration (REA), a former agency of the U.S. Department of Agriculture, was charged with administering loan programs for electrification and telephone service in rural areas. The REA was created on May 11, 1935 by executive order, through efforts of the administration of President Franklin D. Roosevelt, as an independent federal bureau; it was authorized by the United States Congress in 1936 and later, in 1939, reorganized as a division of the U.S. Dept. of Agriculture. The REA's task was to promote electrification in rural areas, which in the 1930s rarely were provided with electricity due to the unwillingness of power companies to serve farmsteads. America lagged significantly behind European countries in rural electrification. Private electric utilities argued that the government had no right to compete with or regulate private enterprise, despite many of these utilities having refused to extend their lines to rural areas, claiming lack of profitability. Since private power companies set rural rates four times as high as city rates, this was a self-fulfilling prophecy.[1] Under the REA program there was no direct government competition to private enterprise. Instead, REA made loans available to local electrification cooperatives, which operated lines and distributed electricity; no loans were made directly to consumers. The REA undertook to provide farms with inexpensive electric lighting and power, and to implement those goals the administration made long-term, self-liquidating loans to state and local governments, to farmers' cooperatives, and to nonprofit organizations. In 1949 the REA was authorized to make loans for telephone improvements; in 1988, REA was permitted to give interest-free loans for job creation and rural electric systems. By the early 1970s about 98% of all farms in the United States had electric service, a demonstration of REA's success. The administration was abolished in 1994 and its functions assumed by the Rural Utilities Service. 
By 1939 the REA served 288,000 households, prompting private business to extend service into the countryside and to lower rates. By the end of the decade, forty percent of rural homes had power, up from around 10% in 1930. From 1949, the REA could also provide assistance to co-operative telephone companies.

So why can't we use this already successful model of the Rural Electrification Project as the basis for government deployment of broadband, particularly in less developed areas? While some of the early attempts at municipal wireless were not considered successful (Philadelphia, for example), they had the clear effect of dropping the local cost of wired broadband access. Competition beyond the entrenched monopolies of DSL and cable is a good thing.
