Livin’ on the Edge

Written by Dr Tony Milbourn, on 1 Apr 2019

This article is from the CW Journal archive.

They say that hemlines go down in times of austerity, and up in periods of affluence. The argument is that a more expensive longer skirt shows a wealthier owner if times are tight. It’s fashion.

Is the same true in our world of computing and communications? Is the emergence of ‘edge’ computing simply the ebb and flow of fashion, or is there more to it? Perhaps it’s ‘religion’: some people like one approach, others prefer an alternative? Not for logical reasons but simply because they do.

In a client/server architecture neither the client nor the server needs to do much, but the processing has to happen somewhere. Think about the simplest set-up: a terminal and a processor, with a separation between the two. The keystrokes are sent (never mind how, for the moment) to a remote CPU; the screen, held in the CPU’s memory, is updated and rendered back to the terminal. This is the ultimate anorexic client, where the terminal does no processing whatsoever. At the opposite extreme, everything happens in the client and all the processing is done there, while the central, remote CPU does nothing other than act as a storage controller. A truly fat client. Real systems are usually more nuanced.
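The two extremes can be sketched in a few lines of Python. Everything here is hypothetical and purely illustrative: in the thin-client case the server holds all the state and returns the rendered screen; in the fat-client case all the work is local and the server is just a store.

```python
# A minimal sketch of the two extremes of client/server processing.
# All class and method names are invented for illustration.

class Server:
    def __init__(self):
        self.buffer = ""    # screen state lives here in the thin-client case
        self.storage = {}   # plain storage in the fat-client case

    def handle_key(self, key):
        """Thin-client path: the server does the processing."""
        self.buffer += key
        return self.buffer  # the "rendered screen" sent back down the wire

    def put(self, name, blob):
        """Fat-client path: the server acts only as a storage controller."""
        self.storage[name] = blob


class ThinClient:
    """The anorexic client: every keystroke goes to the server."""
    def __init__(self, server):
        self.server = server
        self.screen = ""

    def press(self, key):
        self.screen = self.server.handle_key(key)  # no local processing at all


class FatClient:
    """All processing is local; the server only stores the result."""
    def __init__(self, server):
        self.server = server
        self.document = ""

    def press(self, key):
        self.document += key                    # processing happens here
        self.server.put("doc", self.document)   # server is pure storage
```

Both clients produce the same visible result; the only difference is which end of the wire does the work, which is the whole architectural question.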

A 1960s IBM S/360 mainframe supported thin clients and expected only a teletype machine on the end of the wire. The IBM PC brought computing to the network’s edge. Now applications could run close to the user, not remotely.

Improved communications have opened the door for a changing balance between edge and core processing, with better bandwidth, lower latency and lower cost all influencing the architectural choice. The shift of processing from the centre to the edge has driven change and today we see an interesting mix of edge and core or central server processing.


Think about your situation

It’s not always obvious how much processing is taking place at the edge. Since it started, Google has naturally tended towards processing in the centre, because its applications were initially designed for a pretty simple client, on the assumption that connectivity would always be available. No connectivity meant no service. This approach means that app providers do not need a complex relationship, either technical or commercial, with the mobile phone manufacturer, and it allows fast, smooth and consistent updates by changing only the software in the server. This might be okay if the user is in Silicon Valley with good Wi-Fi and cellular coverage, but it’s less helpful for those in more remote locations. Consequently, services like Google Maps have moved to cache more data and do more processing in increasingly powerful smartphones.

Google smart speaker

Ok Google

The Google smart speaker does all its processing in the cloud; even the wake-up phrase is not triggered in the device, raising privacy concerns.

Speech recognition is another classic example of change. In the 1980s, a cellular connection’s low bandwidth meant very little data could be sent to the core, so early attempts at speech recognition in a mobile phone did all the processing in the terminal, and performance (usually poor) was limited both by the algorithms and by the handset’s processor. A couple of decades later the architecture had flipped completely: to support compute-intensive algorithms and context recognition, recognition was implemented in the core server, with encoded speech data sent from the terminal to the recognition algorithm. The benefits of this approach are that the algorithm can be updated and tweaked very easily and, of course, information about how subscribers use recognition is captured across the whole user base.

Interestingly, Apple’s Siri is a hybrid. There is considerable processing power in the iPhone to support a speech recognition system but encoded data is also sent to Apple servers, which run more sophisticated language recognition algorithms and communicate with the iPhone. In some cases, speech recognition is done entirely using the core systems but in others the iPhone algorithms are used. It seems there is enough bandwidth in 4G and ample power in the processor to get the best of both worlds and make today’s speech recognition brilliant in comparison to that of the 1980s.

Ironically, at one time an example of a tricky phrase for a speech recogniser was telling the difference between “recognise speech” and “wreck a nice beach”. If you try the sentence “Oil can wreck a nice beach” on Siri, you can see the two-stage recognition in action when the text is changed from “Oil can recognise speech” to, if you are lucky, “Oil can wreck a nice beach” when the server-side recognition kicks in.
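The two-stage behaviour described above can be sketched as a simple dispatch rule. This is a hypothetical illustration, not Apple’s actual architecture: a fast on-device model answers first, and when the network is available a server-side model’s result replaces the local guess.

```python
# Hypothetical sketch of a hybrid edge/core recogniser.
# local_model and server_model are stand-ins for real recognisers.

def hybrid_recognise(audio, local_model, server_model, online):
    """Return the best available transcription of `audio`.

    The local model always runs, so the user gets an immediate result
    even with no connectivity. If the link is up, the server model's
    richer, context-aware result supersedes the local one.
    """
    text = local_model(audio)            # fast, always available at the edge
    if online:
        refined = server_model(audio)    # slower, more sophisticated, in the core
        if refined is not None:
            text = refined               # second-stage result wins
    return text
```

The design choice is the same trade-off the article describes: the edge guarantees responsiveness, while the core supplies the heavyweight language model when bandwidth allows.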

Don’t want to miss a thing

Business considerations also come into play. New business models and products need to scale rapidly, and the balance between OpEx and CapEx is important. Choices made in architecture affect both. If scale-up simply means enrolling a dumb client on the server, it may be very easy and quick to scale the system. This happens all the time in consumer applications: the client is often a smartphone but is treated as a dumb terminal and the speed of the roll out is limited only by how quickly the servers can be provisioned.

Things might look a bit different with a health monitoring service where the end point may be a custom product, with the OpEx associated with the communications hidden in the service charge. A rapid response is essential even if links are down. Most of the intelligence has to be in the end point.
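An end point like that might be sketched as an edge-first device with store-and-forward upload. Everything here is an assumption for illustration, including the class name and the alarm threshold: the point is that the safety-critical decision is made locally, while readings queue up for the core whenever the link returns.

```python
from collections import deque

ALARM_THRESHOLD = 120  # assumed heart-rate limit, purely illustrative


class HealthMonitor:
    """Hypothetical edge-first end point: local decisions, deferred upload."""

    def __init__(self):
        self.pending = deque()   # readings awaiting upload to the core
        self.alarms = []         # alarms raised locally

    def reading(self, bpm, link_up, upload):
        if bpm > ALARM_THRESHOLD:
            self.alarms.append(bpm)   # rapid response, no server round-trip
        self.pending.append(bpm)      # store...
        if link_up:
            while self.pending:       # ...and forward when the link is back
                upload(self.pending.popleft())
```

The alarm fires even when the link is down; the backlog of readings is flushed to the central server the moment connectivity returns, which is where the bulk analysis can happen.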

Does a fashionable technology like artificial intelligence (AI) have a role here with its compute-intensive algorithms and its value lying in big data resources? Both attributes mean that the central server takes a stronger role. It is here that the computational power needed can be shared across multiple end points and it is here that the acquired data can be stored, processed and monetised.

Going back to speech recognition as an AI application: if the end point acts autonomously, running recognition algorithms and making decisions without the central server, then the opportunity to grab the context, content and consequence of the speech being recognised is lost. I suspect this accounts for the complex architecture adopted by Siri. Apple is maximising its long-term value by acquiring all this data, while the users, incidentally, are paying to ship the data to the company through their phone contracts.

I think we can conclude that it is not just fashion that determines the level of edge processing in a system, there are business, commercial and technical forces that push in one direction or the other. As communications bandwidths increase and coverage improves (initially with 5G), and AI algorithms become more prevalent, perhaps we’ll see a more complex picture emerging, where hardware supports AI at the edge for fast autonomous decision making, with bulk data shipped to the central server for further analysis. In fact, a bit like Siri does today. 


Dr Tony Milbourn
- Independent

Tony has 30 years’ experience in the mobile communications industry and a PhD in control theory. Following a career at PA Technology and then as one of the founders of TTP, in 2000 he led the spin-out and flotation of TTP Communications plc, a major licensing business in cellular that was acquired in 2006 by Motorola. He was also a founder of ip.access, the femtocell business, and more recently led the spin-out of a soft modem start-up, Cognovo, from ARM Holdings. Cognovo was acquired by u-blox AG in 2012. u-blox is a $400m Swiss supplier of location and communications modules and chips that is focused on industrial, automotive and professional applications, particularly in the Internet of Things. For 5 years Tony drove the strategic expansion of u-blox and enabled a number of acquisitions that extended the scope and direction of the company. He is interested in creating new opportunities at the point where communications and computing converge.
