What did GSM get right?

Written by John Haine, on 18 Jan 2022
GSM is 31 this year. What, GSM is only 31? In thirty-one years cellular systems have evolved through 4 generations and now we are thinking about 6G. Comparing 5G with GSM you might wonder why we were so incompetent that we had to make such radical changes over 31 years. But GSM got many things right, and in some ways the gains made in that second-generation standard have been discarded in the race for the latest thing. It’s worth looking back to “1G” to see why GSM was so radical at the time, and what made it such a success.

When the AMPS1 standard was designed in Bell Labs it built on existing mobile radio technology, as developed and used over decades in systems like emergency services and mobile dispatch. Channels were 30 kHz wide and modulation was analogue FM voice. Two things had to be done to the radio – it had to be able to carry fast digital signalling for control; and since people expect to speak while they listen on the phone it had to be made duplex. The first was fairly easy, using a method of impressing digital data bursts on the carrier that was nearly imperceptible to the listener. For the second, the only feasible approach was to adopt “frequency division duplex” (FDD), with distinct bands for uplink and downlink, so that a channel used for “speaking” from a mobile was paired with a “listening” channel 45 MHz away. The mobile needed a “duplex filter”, which was quite a complex device, and at the time rather large and heavy, which militated against hand portables. This triggered the development of alternative filter technologies, initially based on ceramics, to allow smaller devices; then smaller SAW2 and even lumped-element duplex filters emerged. Of course a lot more went into AMPS, such as all the necessary control protocols and the interconnect to the fixed telephone network. When the UK wanted to start a mobile phone network AMPS was the starting point, though with some changes, notably to the channel spacing, which followed the European 25 kHz norm. The result was TACS3, deployed by Cellnet and Vodafone. AMPS and TACS weren’t the only systems – many countries in Europe adopted NMT4, designed initially by the Nordic PTTs5 and telecoms companies (notably Ericsson), but following the same principles. Indeed, just before TACS was adopted in the UK, Europe was close to choosing NMT as a continental standard – the UK’s choice impelled Europe to form first the Groupe Spécial Mobile (GSM) within CEPT6, and then to move GSM into ETSI7.
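
To make the FDD channel pairing concrete, here is a small illustrative sketch in Python. The 30 kHz channel spacing and the 45 MHz duplex offset come from the text above; the band-edge frequency and the channel-number arithmetic are assumptions for illustration, not a citation of the AMPS channel plan.

```python
# Illustrative sketch: AMPS-style FDD channel pairing.
# The 30 kHz spacing and 45 MHz duplex offset are from the article;
# the 825 MHz band edge and channel-number formula are assumed for illustration.

CHANNEL_SPACING_MHZ = 0.030   # 30 kHz analogue FM channels
DUPLEX_OFFSET_MHZ = 45.0      # fixed uplink/downlink separation

def amps_pair(channel: int, band_edge_mhz: float = 825.0) -> tuple[float, float]:
    """Return (uplink, downlink) carrier frequencies in MHz for a channel number."""
    uplink = band_edge_mhz + channel * CHANNEL_SPACING_MHZ  # mobile transmits here...
    downlink = uplink + DUPLEX_OFFSET_MHZ                   # ...and listens 45 MHz away
    return uplink, downlink

up, down = amps_pair(100)
print(f"channel 100: mobile TX {up:.3f} MHz, mobile RX {down:.3f} MHz")
```

The fixed 45 MHz separation is exactly why the mobile needed a duplex filter: it must pass its own strong transmission in one band while protecting a receiver listening simultaneously just 45 MHz away.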

Revolution?

The people developing GSM had a vision of a consumer mobile phone revolution. A strategic document from the Eurodata Foundation at the time posited a system to allow every adult citizen to carry a low-cost pocket phone that would work anywhere in Europe except perhaps in remote mountainous areas. They realised quickly that something based on evolved PMR8 technology wasn’t going to allow the economies of scale that could only come from integrating most of the phone’s functions on to silicon. Nor could analogue modulation support the capacity needs of “every adult citizen” chatting on the phone. European industry and operators swung behind a huge R&D effort to standardise a digital technology. Meanwhile in the USA the industry thought this would all be much too difficult and ploughed on with digitising AMPS to produce DAMPS (as in squib). As they say, “the rest is history”!

Technically speaking

In retrospect, there are a few key technical factors that made GSM a success, to the extent that not many years after its launch in Europe many if not most US operators deployed a version of it as well. Many of these are to do with the way the SIM9 card gave operators confidence they could bill securely, not to mention that it meant that a phone without a SIM could be stocked and shipped without being too much of a target for crime. And of course, for the user, the SIM was a way to keep their contacts separate from the phone when they wanted to transfer them to a new one. Most of all, continental roaming created a single mass market that could allow the system to achieve the scale economies needed.

But for me the key was the radio. With the old analogue systems, channels were divided up by frequency (as well as duplexing being by frequency). Increasing capacity needed either more spectrum or smaller channel spacing. There wasn’t any more spectrum (then); and reducing channel spacing made the radio harder and more expensive. GSM decided to use digital modulation and wide channels – 200 kHz rather than 25 (or 12.5…) kHz. This meant dealing with nasty Doppler and multipath distortion, which required some (at the time) crunchy signal processing – but that just needed silicon gates. Channels were made in time rather than frequency – “time division multiple access” – and crucially duplexing was done in time as well. The mobile still received in one band and transmitted in another, but not at the same time. The duplex filter went out of the window and was replaced with a simple RF10 switch. Suddenly most of the tricky RF circuitry of IF11 filters and limiters and narrow-band synthesisers and FM discriminators wasn’t needed any more, and much of the hardware in a mobile terminal could be absorbed into just two integrated circuits, one for RF and one for baseband processing. Nowadays it can be done in a single chip.
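
The time-division duplexing arithmetic can be sketched as follows. The slot count (8 per frame), the roughly 4.615 ms frame, and the three-slot offset between a mobile’s receive and transmit bursts are real GSM figures; the code itself is just an illustration of why the two bursts can never coincide.

```python
# Illustrative sketch: why the GSM mobile needs no duplex filter.
# Real GSM figures: 8 timeslots per ~4.615 ms frame; a mobile transmits
# 3 timeslots after it receives, so it never does both at the same instant.

SLOTS_PER_FRAME = 8
FRAME_MS = 4.615
SLOT_MS = FRAME_MS / SLOTS_PER_FRAME   # ~0.577 ms per burst
TX_OFFSET = 3                          # uplink slot lags downlink slot by 3

def schedule(rx_slot: int) -> dict:
    """Receive and transmit windows (ms into the frame) for one mobile."""
    tx_slot = (rx_slot + TX_OFFSET) % SLOTS_PER_FRAME
    return {
        "rx": (rx_slot * SLOT_MS, (rx_slot + 1) * SLOT_MS),
        "tx": (tx_slot * SLOT_MS, (tx_slot + 1) * SLOT_MS),
    }

s = schedule(rx_slot=0)
# The rx and tx windows never overlap, so a simple RF switch can steer the
# antenna between receiver and transmitter instead of a bulky duplexer.
overlap = max(0.0, min(s["rx"][1], s["tx"][1]) - max(s["rx"][0], s["tx"][0]))
print("overlap between rx and tx bursts (ms):", overlap)
```

Because transmit and receive are separated in time as well as frequency, the filter that dominated the analogue handset’s size and cost simply disappears from the bill of materials.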

GSM launches

GSM was launched in Europe using the same 900 MHz band that TACS used in the UK, and of course became a huge success. UK operators were quick to phase out TACS, as GSM handsets were much more attractive and cheaper than analogue ones. The UK government wanted to get more competition into mobile so they invented a new concept – “Personal Communication Networks” (PCN) – and allocated new spectrum for it at 1800 MHz, proposing to license two new operators. To begin with they didn’t specify the technology to be used but it quickly became obvious that it should be based on GSM – the radio could be re-banded easily, the baseband wouldn’t be any different, and the standards could easily be extended. The clincher came when the handset manufacturers pointed out that they could make dual-band handsets cheaper than separate 900 and 1800 product lines. The PCN operators adopted GSM, ETSI updated the standard, the same 1800 MHz spectrum was allocated across Europe, and dual-band handsets soon became the norm. Vodafone and Cellnet were given some 1800 MHz spectrum in the UK, and cross-band European roaming became possible.


When GSM was adopted in the US, at 800 and 1900 MHz, it was a simple matter to add these bands to the standard. Within a few years you could buy a quad-band handset that would work all over Europe, the Americas, and much of Asia. All this was easy(-ish) thanks to that simple and elegant radio system using time-division multiple access and frequency/time-division duplex. Just imagine if DAMPS had been adopted: you’d have needed four duplex filters in a quad-band handset, making it significantly bigger and more expensive. Ridiculous!

Along comes 3G

Then along came 3G. The industry decided it was bored with the GSM approach and that wideband CDMA12 was the way forward – despite the fact that it needed to use FDD (and so a duplex filter) to work and wasn’t going to be very good for high speed packet data (later leading to major changes to the on-air protocol). (And let’s not mention the IPR13 problems.) Oops!

More and more bands were being allocated to meet the burgeoning demand, so you needed 5, then 6 or more duplexers. The handset radio became complicated, and all the filters and switches dissipated energy, making the receiver deafer and the transmitter less efficient. Oh, and it all had to support GSM too, as 3G coverage was worse. As 4G came along, with features like MIMO14 and carrier aggregation, it just got more difficult.

At least there is a nod to simplification in 5G with greater use of TDD15 bands, but the 5G radio is still a fearsome beast that pushes the limit of what’s possible in a consumer device, and may be a limiting factor in moving ahead to 6G. It’s hard for a handset manufacturer to make a single “world phone”; rather, they need to maintain different product lines for different regions.

GSM created our industry. It changed mobile phones from an expensive luxury into a consumer good that everyone wanted. There were many factors that contributed to this, but one above all was the conceptual simplicity of its radio, which allowed most of the phone’s functions to be performed in a silicon chip. 

GSM quickly became by far the highest-volume consumer product in history up to that time. Maybe there are lessons to be learned for 6G and what lies beyond?


Footnotes
  1. Advanced Mobile Phone System
  2. Surface Acoustic Wave
  3. Total Access Communication System
  4. Nordic Mobile Telephone
  5. Post, Telegraph and Telephone operator
  6. European Conference of Postal and Telecommunications Administrations
  7. European Telecommunications Standards Institute
  8. Professional Mobile Radio
  9. Subscriber Identity Module
  10. Radio Frequency
  11. Intermediate Frequency
  12. Code Division Multiple Access
  13. Intellectual Property Rights
  14. Multiple Input Multiple Output
  15. Time Division Duplexing

John Haine
Visiting Professor - University of Bristol (Communication Systems & Networks Research Group)

John Haine has spent his career in the electronics and communications industry, working for large corporations and with four Cambridge start-ups. His technical background includes R&D in radio circuitry and microwave circuit theory; and the design of novel radio systems for cordless telephony, mobile data, fixed wireless access and IoT communications. He has led standardisation activities in mobile data and FWA in ETSI, and contributed to WiMax in IEEE. At various times he has been involved in and led fund-raising and M&A activities. In 1999 he joined TTP Communications working on research, technology strategy and M&A; and after the company’s acquisition by Motorola became Director of Technology Strategy in Motorola Mobile Devices. After leaving Motorola he was CTO Enterprise Systems with ip.access, a manufacturer of GSM picocells and 3G femtocells. In early 2010 he joined Cognovo, which was acquired by u-blox AG in 2012. He led u-blox' involvement in 3GPP NB-IoT standardisation and the company's initial development of the first modules for trials and demonstrations. Now retired from u-blox he is a Visiting Professor in Electronic and Electrical Engineering at Bristol University, where he chairs the Centre for Doctoral Training in Communications. He was founder chair and is Board Member Emeritus of the IoT Security Foundation. He served on the CW Board and now chairs the Editorial Board of the CW Journal.  John has a first degree from Birmingham (1971) and a PhD from Leeds (1977) universities, and is a Life Member of the IEEE.
