On The Spectrum

This is an article about two very different types of spectra…

The First Spectrum:

Today I am feeling particularly inspired by a teenager who was diagnosed with an autism spectrum disorder. Time Magazine announced today that this young woman will be their “Person of the Year”, the youngest person ever to be honored in this way. I am referring, of course, to Greta Thunberg, the young Swedish environmental activist who has done so much to help publicize the climate crisis. Last week I wrote the second section of this article about a different spectrum, and I was about to publish it today when I noticed this announcement. I was not intending to write about the autism spectrum today, but I shamelessly seized upon this coincidence of spectra. Both spectra are very interesting to me.

Greta Thunberg

Greta Tintin Eleonora Ernman Thunberg

If you have not yet seen Greta Thunberg’s truly inspirational talk at TEDx Stockholm in 2018, I encourage you to spend your next 11 minutes doing just that. Or perhaps watch her significantly longer “house on fire” address to the EU Parliament. Or perhaps just spend your next 4 minutes watching this same young woman passionately shaming world leaders for their inaction in her speech at the opening of the United Nations Climate Action Summit in 2019.

A particularly poignant point that Miss Thunberg mentioned in her TED talk was that her Asperger syndrome actually gave her a distinct perceptual advantage over us “normal” people. I wholeheartedly agree with her that uncommon focus and near-binary classification can be advantageous in many situations. I loved this quote:

“I think in many ways that we autistic are the normal ones, and the rest of the people are pretty strange.”

Here in the San Francisco Bay Area, in Silicon Valley, I have worked with many brilliant, singularly focussed technologists who (much like Greta Thunberg) see the world in black-and-white, with little tolerance for any shades of gray. Also, many have no interest in talking about anything but their area of technical expertise (but on those topics they will excitedly and bumptiously devour any conversation). I personally embrace opportunities to work with any of these sharp minds, whether or not they are “on the spectrum”.

Personally, I’m neither brilliant nor singularly focussed (on the contrary, I bore easily and I eagerly jump onto every shiny new technology). Nevertheless, if I reflect on my own behavior, I find I do share some quirks with my colleagues on the spectrum.

I tend to have a great many dogmatic black/white views. For example, I simply cannot comprehend the interest that so many people have in observing professional sports. Someone asked me the other day some question related to the “San Francisco 49ers” (in an attempt to engage me in casual conversation after learning I lived near The City). I soon found myself confessing that although I was sure the 49ers were a local sports team, I had no idea which sport they played, or anything else about them. I know this makes me appear somewhat antisocial to the majority of Americans who seem to love sports. I cannot, however, make myself invest any of the precious little time I have on this planet in becoming an expert on such a useless subject area. So I accept being very awkward in social situations where sports are discussed.

Here’s another example… I’m an atheist and I believe that religion does far more harm than good. I struggle to stay silent when people make what I believe are absurd utterances about God blessing me, or thanking their God for this or that, or telling me that something is beyond human control and in the hands of their God. I just wonder: why do smart people believe weird things? Often religious people try to evangelize their faith to me (and I guess I am equally foolish, since I often try to point out to them that science is a far better candle in the dark than any religion). When engaged in these sorts of evangelizing conversations I find it difficult to step back and give any quarter to faith-based arguments. That’s antisocial of me too; I know.

Those attributes tend to place me pretty close to folks I know who are on the spectrum. And perhaps I even lazily allow myself to drift farther toward these antisocial behaviors knowing that they are normalized here in the Silicon Valley bubble.

There is one thing about me that seems quite different from the folks I know on the spectrum, and it is this one thing that I use to self-diagnose myself “off the spectrum”: I truly love to talk with people. Anyone. Anywhere. I have a bagful of strategies for (mostly unawkwardly) striking up conversations with complete strangers in elevators, lineups, waiting rooms, airplane seats, or with cash register operators, bus or cab or ride share drivers, waiters, hotel staff, etc. My family members are much more introverted and find it embarrassing when I engage any strangers in conversation. But I find people are so interesting! Each person I meet is so different from the next and they all have something interesting to talk about, and I’m all ears! It is a huge bonus if we have time to really talk in depth about something (anything) observable, scientific, technological, geographical, political, sociological, historical, anthropological, linguistic, psychological, educational, medical, philosophical, etc., etc… or even perhaps fiscal.

But enough about the autism spectrum.

Another Kind of Spectrum:

This other spectrum is a continuum: at one end are bespoke, infinitely customizable solutions that are expensive to maintain and update; at the other end are solutions that admittedly place a few restrictions on the problems you can solve, but which are either completely maintenance-free or nearly trivial to maintain and update.

This spectrum ranges from custom-hardware-based solutions on the left, through commodity hardware solutions and virtualized systems of various types, to “serverless” compute approaches and automated assistants on the right. Let’s take a look at this range of solution choices and consider the strengths and weaknesses of each approach for different applications.

At the red end of the spectrum are solutions involving things like ASICs (Application-Specific Integrated Circuits) and custom-designed PCBs (Printed Circuit Boards). These sorts of solutions can be tailored to fit any application very precisely, but once manufactured, they cannot be modified. When a bug is discovered, or a security vulnerability is exposed, new hardware must be designed and manufactured, and old hardware in the field must be physically replaced in order to change its operation. Often these solutions can be much more cost-efficient, and much more performant for their particular application, than any comparable general-purpose chips or commodity hardware platforms. For example, the custom hardware may be much more power-efficient.

Moving away from those rigid solutions, some custom hardware is more flexible and can be reconfigured repeatedly. EEPROMs (Electrically Erasable Programmable Read-Only Memory), FPGAs (Field-Programmable Gate Arrays), their analog cousins, FPAAs (Field-Programmable Analog Arrays), and hybrid digital/analog arrays all fall into this category. Given appropriate support hardware and software, these solutions can be dynamically re-configured to target changing situations in the application area. Often, however, a manufacturer will omit the necessary update circuitry and simply program these devices once in the factory. In this situation, a new version of the product can be released with new features while using identical hardware, but machines in the field cannot normally be upgraded to have those new features. The older devices can sometimes be returned to the factory to be reprogrammed using specialized hardware and software.

The next step is to use commodity hardware to design your solution. Often this is less expensive due to the cost reductions that come with mass production. The recent explosion of mass-produced microcontroller boards and SBCs (Single-Board Computers) like the Arduinos, Beagles, Raspberry Pi machines, Espressif devices, and many more (mostly ARM-based, currently) has enabled a wide range of applications to be built upon very inexpensive and powerful general-purpose hardware. This site has several other articles about how to build applications using these small general-purpose computers. Of course there are more expensive and more powerful general-purpose computers too, like those based on Intel and AMD x86 architectures.

These general-purpose computers do most of their application-specific operation in software, using peripheral hardware (e.g., sensors and actuators) attached using general purpose interconnection technologies. Perhaps the most well-known of these technologies today is USB (Universal Serial Bus). Many popular peripheral devices today connect to general purpose computers using USB cables. Another popular interconnect technology is Bluetooth. Some more recent peripheral devices connect using WiFi. Typically, USB, Bluetooth, and WiFi peripherals are actually small computers themselves that communicate with the general purpose computer to provide a peripheral function (e.g., camera, printer, headset, etc.). Connection technologies like these (and many others) have a hardware component (connectors, wires, radio transceivers, and electrical signals with well-known properties) as well as a software component (e.g., a communications protocol to deal with security, retransmission of lost information, etc.).
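
As a small illustration of how accessible these interconnects are from software, here is a minimal sketch that simply enumerates the USB peripherals attached to a general-purpose computer. It assumes the third-party pyusb package and a libusb backend are installed; neither is required by anything else in this article.

```python
# Sketch: list attached USB peripherals.
# Assumes the third-party "pyusb" package and a libusb backend are installed.
import usb.core

for dev in usb.core.find(find_all=True):
    # idVendor and idProduct identify the manufacturer and model of each device.
    print(f"vendor=0x{dev.idVendor:04x} product=0x{dev.idProduct:04x}")
```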

Historically, much simpler devices acted as peripherals, e.g., on factory floors or in building control systems. They often connected using serial communications, typically over 2 wires, and often using UART (Universal Asynchronous Receiver/Transmitter) chips at each end. An early (1960!) version of this was RS-232 (Recommended Standard 232), and more recently RS-485 was widely used. Over time a plethora of industrial communications “standards” has proliferated; perhaps the most significant of these are SCADA and OPC/UA.
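
Talking to one of these serial peripherals from a general-purpose computer takes very little code. The sketch below assumes the third-party pyserial package; the port name and baud rate are hypothetical and depend entirely on your device.

```python
# Sketch: read lines from a simple serial (UART-style) peripheral.
# Assumes the third-party "pyserial" package; the port name and baud rate
# below are hypothetical and depend on your device.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as port:
    for _ in range(10):
        line = port.readline().decode("ascii", errors="replace").strip()
        if line:
            print("received:", line)
```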

On a smaller scale, individual sensors (like sonar distance sensors, passive infrared warm-body sensors, temperature, humidity, air quality, and particle counter sensors, and more) tend to use very simple electrical signals and communications protocols. Some still use simple RS-232 serial communications, but most new designs use either the SPI (Serial Peripheral Interface) protocol or the IIC (Inter-Integrated Circuit) protocol, often called I2C. Most small computers, like the Raspberry Pi and Arduino, support these protocols, making it easy to wire these devices onto the GPIO (General Purpose Input/Output) pins on their boards.
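
To give a sense of scale, here is a sketch of polling an I2C sensor on a Raspberry Pi. It assumes the third-party smbus2 package, I2C enabled on the Pi, and a hypothetical sensor at address 0x48; the register layout and how to interpret the bytes come from the particular sensor’s datasheet.

```python
# Sketch: poll an I2C sensor wired to a Raspberry Pi GPIO header.
# Assumes the third-party "smbus2" package, I2C enabled on the Pi, and a
# hypothetical sensor at address 0x48 (check your sensor's datasheet).
from smbus2 import SMBus

I2C_BUS = 1          # /dev/i2c-1 on most Raspberry Pi models
SENSOR_ADDR = 0x48   # hypothetical device address

with SMBus(I2C_BUS) as bus:
    # Read two bytes starting at register 0x00; converting them into a
    # temperature (or other reading) depends entirely on the sensor.
    raw = bus.read_i2c_block_data(SENSOR_ADDR, 0x00, 2)
    print("raw sensor bytes:", raw)
```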

These small peripheral devices are usually fixed-purpose hardware devices which cannot be upgraded in the field except by physical replacement.

If you are going to set up a general-purpose computer system for your application, you have the option to buy it or rent it. IBM, for example, offers “bare metal” servers in its cloud. You can rent them by the hour, or by the month, with a wide variety of customizable configurations (e.g., number of CPU cores, amount of RAM, amount of disk storage, and network bandwidth). Buying is a capital cost and may be cheaper in the long run, but renting is an operational cost that is certainly lower initially. When you rent you also do not need to worry about hardware failures (e.g., a fan or power supply stops working) since your rental provider will take care of that.
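
The buy-versus-rent decision comes down to simple arithmetic. Here is a back-of-the-envelope sketch; every figure in it is hypothetical and should be replaced with real quotes for your own case.

```python
# Sketch: back-of-the-envelope buy-vs-rent comparison.
# All figures are hypothetical; plug in real quotes for your own situation.
purchase_price = 6000.00         # one-time cost to buy comparable hardware
monthly_rent = 250.00            # bare-metal rental fee per month
monthly_upkeep_if_owned = 50.00  # power, space, spare parts, etc.

# Renting stays cheaper until the accumulated rent exceeds the purchase price
# plus the upkeep you would have paid anyway as an owner.
break_even_months = purchase_price / (monthly_rent - monthly_upkeep_if_owned)
print(f"Renting is cheaper for roughly the first {break_even_months:.0f} months")
```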

For all of the examples covered so far, hardware is involved. Power must be connected, and peripherals must be wired (e.g., sensors, or perhaps a keyboard, mouse, etc.). But that’s the easy part. Any solution using general-purpose hardware must also include software to implement the desired application.

Software is potentially upgradable in the field, assuming there is some way to perform the software change. This may mean reflashing the soldered flash chips, modifying the removable storage, or attaching some kind of programming interface; on the more powerful devices, if they have at least intermittent connectivity, at least some of their software can perhaps be updated over the Internet.

Software for a general-purpose computing system usually consists of firmware, which must be flashed onto soldered chips (this is often called the “BIOS”, “CMOS” or “PROM” on laptop and desktop machines), and then operating system and application software on some form of local storage. The software on local storage typically includes a system kernel (like NT, BSD, or Linux), the “user space” operating system code (like Windows, MacOS, or your Linux distro: RedHat, Debian, Ubuntu, etc.), and any application code. Owners of these systems must ensure all of this software is kept up-to-date with the latest security patches — including the firmware, kernel, OS, and all parts of their application code (programming language subsystem, libraries, and the application that sits on top of all of that). Keeping all of this software up-to-date requires constant vigilance: monitoring many sources of information to learn when vulnerabilities and exploits are discovered, and when updates are made available to counter them.
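
To make those layers concrete, here is a small sketch that reports a few of the things an owner must keep patched on a Linux machine: the kernel, the distro, and the language subsystem. Firmware versions usually require vendor-specific tools and are not shown.

```python
# Sketch: report a few of the software layers that must be kept patched on a
# Linux system (kernel, distro, language subsystem). Firmware is not shown.
import platform
import sys

print("kernel :", platform.release())
print("python :", sys.version.split()[0])

try:
    # /etc/os-release identifies the distribution on most modern Linux systems.
    with open("/etc/os-release") as f:
        for line in f:
            if line.startswith("PRETTY_NAME="):
                print("distro :", line.split("=", 1)[1].strip().strip('"'))
except FileNotFoundError:
    print("distro : unknown (no /etc/os-release)")
```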

As we move to the middle of the spectrum, virtualization support is required in the general purpose hardware. Virtualization enables us to abstract our software configuration away from the hardware, and create a VM (Virtual Machine) solution that is portable to different hardware, easy to replicate (e.g., to run many instances), and easy to stop and restart. You can even move a running VM from one hardware host to another and have it continue as if nothing had happened. Virtualization can also reduce the need for hardware, by running multiple applications (each with their own operating system) on top of a single hardware platform.
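
On Linux you can check whether your CPU advertises that hardware virtualization support. This sketch simply looks for the well-known CPU flags in /proc/cpuinfo: “vmx” on Intel (VT-x) and “svm” on AMD (AMD-V).

```python
# Sketch (Linux-specific): check /proc/cpuinfo for the CPU flags that indicate
# hardware virtualization support: "vmx" on Intel (VT-x), "svm" on AMD (AMD-V).
def has_hw_virtualization() -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            flags = f.read().split()
    except FileNotFoundError:
        return False
    return "vmx" in flags or "svm" in flags

print("hardware virtualization available:", has_hw_virtualization())
```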

To use VMs, one option is to set up your own hardware and run one of the popular hardware virtualization solutions on top (e.g., VMWare, Parallels, or VirtualBox). Parallels (which runs only on MacOS hosts) and VMWare are commercial virtualization software packages, and both are excellent in terms of performance and functionality compared with VirtualBox. However, VirtualBox is free and runs on Windows, MacOS, and Linux, so if you don’t need the features of those commercial solutions, it is worth considering. I use VirtualBox frequently.

Alternatively, you can rent VMs from many different providers. AWS (Amazon Web Services) is likely the most popular of these VM rental services. AWS was very early into this market and has created a huge ecosystem of features and partners that may be useful to you. More recently Microsoft Azure, Google Cloud and IBM Cloud have also jumped into this market, and each offers competitive services that differ in features and pricing. The nice thing about renting VMs is that you get all of the advantages of VMs on your own hardware, without any of the hassles of actual hardware maintenance. Since this field is highly competitive, renting VMs from any of these services is very reasonably priced.
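
Renting a VM really does boil down to a single API call. The sketch below uses AWS EC2 as an example; it assumes the third-party boto3 package and configured AWS credentials, and the AMI ID shown is just a placeholder, not a real image.

```python
# Sketch: launch a small rented VM (an EC2 instance) with one API call.
# Assumes the third-party "boto3" package and configured AWS credentials;
# the AMI ID below is a placeholder, not a real image.
import boto3

ec2 = boto3.resource("ec2")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder; choose a real AMI for your region
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("launched instance:", instances[0].id)
```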

Working inside of a VM, you are abstracted away from the hardware, but there are significant performance costs for that virtualization. The severity of the costs depends upon how the virtualization is implemented. There are many techniques for virtualization today, but the most flexible techniques fall into two categories, called type 1 hypervisors and type 2 hypervisors.

Type 1 hypervisors run directly on top of the hardware (on the “bare metal” as they say) and any operating system that runs on the machine must run on top of the type 1 hypervisor. The hypervisor must virtualize all parts of the underlying hardware, enabling each “guest” operating system to access the underlying host without being aware that it is sharing the underlying “host” hardware with any other guests. The hypervisor provides interfaces that simulate the real hardware interfaces, and there is a small cost for this.

Type 2 hypervisors on the other hand run on top of an operating system. That is, a “host” operating system runs directly on the real hardware, and the type 2 hypervisor runs on top of that host OS. The type 2 hypervisor must create a simulation of the hardware that runs on top of the memory management software, and device drivers provided by the host operating system. This simulation introduces significant additional overhead for all of the “guest” OSes hosted by the type 2 hypervisor.

The next step along this continuum is “containerization” (sometimes called “operating system level virtualization”) and it is essentially Linux-only. Although container technologies, like the popular Docker system, are available for Linux, Windows and MacOS, in Windows and MacOS they work by spinning up a Linux VM using a type 2 hypervisor, and then running the Docker system within that Linux VM. Although this technically enables Windows and MacOS machines to run containers, the containers suffer the heavy type 2 hypervisor virtualization penalty that does not occur when containers run on Linux. So let’s look in detail at how containers work, only on Linux (because when containers are on Windows or MacOS they are really on Linux anyway).

There are several different containerization technologies available for Linux, and there is a standardization effort called OCI (Open Container Initiative). Docker is by far the most popular of these container technologies. Containers run directly on top of the system kernel, in just the same way that the host operating system runs on top of the kernel. In a similar way that a type 1 hypervisor runs all guests on top of the hardware, containers are run on top of the kernel, so there is very little overhead. Whereas type 1 hypervisors run the complete set of software for each guest VM (firmware, kernel, OS distro, and application), containers contain only the last two of those components (OS distro and application) and they run on top of the host firmware and kernel. In some cases (e.g., when developing in a language like Go) even the OS distro can be omitted, resulting in an even smaller and more efficient container. In this environment, the host OS runs beside the containers and on top of the same kernel. Containers are therefore significantly smaller than VMs (often only a few megabytes) and they perform almost as efficiently as similar code running on bare metal. Typically a single hardware machine can host 10 to 100 times as many containers as it could host VMs. Early demonstrations of Docker ran 255 containers on a single 1GB RAM Raspberry Pi computer, for example.
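
You can see the kernel-sharing property for yourself with a throwaway container. The sketch below assumes a running Docker daemon and the docker Python SDK; printing “uname -r” inside a small Alpine Linux container reports the host’s kernel version, because only the distro userland and the application live inside the image.

```python
# Sketch: start a throwaway container from the small Alpine Linux image.
# Assumes a running Docker daemon and the "docker" Python SDK.
# "uname -r" inside the container prints the *host's* kernel version,
# illustrating that containers share the host kernel.
import docker

client = docker.from_env()
output = client.containers.run("alpine:latest", "uname -r", remove=True)
print("kernel seen inside the container:", output.decode().strip())
```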

As with the previous options, you can run containers on your own physical computer (after installing Linux), or you can pay to run them on a cloud container service. All of the major cloud players have such container services (AWS, Azure, Google, IBM, RedHat, etc.). Running a container in a cloud service means you no longer have to worry about firmware updates, or kernel updates, since your cloud provider has an obligation to be vigilant about vulnerabilities and exploits in those areas and to keep their machines up-to-date. You only need to worry about your OS distribution’s updates, and your own application-related updates (language subsystem, libraries and application-specific code).

As we have moved along the spectrum, from the left side toward the right, we have moved away from customized hardware, toward hardware independence. We have moved from complete flexibility for your application, through having a few restrictions created by sharing a hypervisor, to requiring a common Linux kernel version. At the same time, we have made it progressively easier to update the application, and reduced the risks of vulnerabilities and exploits in the code we maintain for our application. Let’s keep sliding along this continuum and see what comes next…

Amazon introduced their Lambda service 5 years ago (in November 2014) and ushered in a new era of “Serverless Computing”. All of the major cloud companies now provide similar services. Serverless Computing abstracts away the entire computing environment so you only need to develop your application-specific code, and deploy just that code. The underlying hardware, kernel, operating system distro, and language subsystem are all provided by the Serverless Computing infrastructure. For example, you might provide a function, written in your favorite language, that reacts to an event and takes some action. The event might be a JSON message arriving at an MQTT or Kafka broker, or it might be an HTTP method invocation on a specific web URL. You could, for example, implement the cloud side of an IoT system, or even a complete web page interface, solely by building these functions and deploying them.
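
Here is a sketch of what that looks like in practice: the function below is the entire deployable artifact for a simple serverless endpoint. It assumes AWS Lambda with Python and an HTTP-style event (e.g., from API Gateway); other providers use a similar shape with different names.

```python
# Sketch: the entire deployable artifact for a simple serverless function.
# Assumes AWS Lambda with Python and an HTTP-style event (e.g., API Gateway).
import json

def lambda_handler(event, context):
    # Pull an optional "name" query parameter out of the HTTP event.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```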

Using Serverless Computing, you can build these kinds of applications and deploy them on one of these commercial services, and they will be relatively immune to future risks in the language subsystem, OS, kernel, firmware or even hardware. Instead of you needing to remain vigilant for possible vulnerabilities in each of these areas, and keeping each of them up-to-date with the latest patches, you only need to concentrate on your own application code, and leave the rest of the vigilance to your service provider.

With Serverless Computing we have almost made it to the right side of the spectrum. For a software developer, it can hardly get much simpler than deploying tiny functions to achieve your application solution. However, there’s an additional technology that I want to mention that seems to be even farther to the right on this continuum.

Voice assistants receive commands from us in a variety of human languages. Most accept directions in English, at least. You can configure your voice assistant to take a particular action when a particular event occurs. With tools like IFTTT (If This Then That) you can construct elaborate instructions for your assistant to execute when certain events occur. Of course, it does not matter which programming languages the voice assistant was built with, or on which OS, kernel, firmware, or hardware it runs. You simply tell it what to do in natural language, and the assistant will take care of making it happen. Today voice assistants are very primitive, and quite limited in what they can do, but in the future I think this technology has the potential for building significant applications without ever writing a line of code.
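
The glue between programs and these assistants can be as small as a single web request. The sketch below fires an IFTTT Webhooks event, which an applet could then route to an assistant routine or any other connected service; it assumes the third-party requests package, and the event name and key are placeholders for values from your own IFTTT Webhooks settings.

```python
# Sketch: fire an IFTTT Webhooks event from a program.
# Assumes the third-party "requests" package; the event name and key are
# placeholders for values from your own IFTTT Webhooks settings.
import requests

IFTTT_EVENT = "sensor_alert"       # hypothetical event name
IFTTT_KEY = "YOUR_WEBHOOKS_KEY"    # placeholder key

url = f"https://maker.ifttt.com/trigger/{IFTTT_EVENT}/with/key/{IFTTT_KEY}"
response = requests.post(url, json={"value1": "temperature too high"})
print("IFTTT responded with HTTP", response.status_code)
```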

What do you think? I’d love to know your thoughts and insights on this spectrum of bespoke hardware through to voice assistants. Or maybe you can tell me something about the more human spectrum that prompted me to write this article in the first place. Please use the interface below to leave your comments.

