
ARM vs Others

Introduction

Non-technically speaking, an embedded system is a hand-sized green circuit board that makes all the decisions in a system. Embedded systems are not only found everywhere, but ever since their inception they have revolutionized our lives. Upon their debut, for example, embedded systems made an immediate impact in the electronics industry by replacing analog consumer electronics (CRT televisions, VCRs, cassette tapes, analog cameras, analog clocks, analog telephones, etc.) with their digital counterparts (digital televisions, DVDs, CDs, digital cameras, digital clocks, cell phones, etc.). Moreover, some embedded systems are themselves a finished product (television remote controls, digital wrist watches, calculators, etc.), while in other instances the embedded system is built into an appliance and controls its electromechanical components (actuators, solenoids, electric motors, servo motors, etc.), making the system run more efficiently in the process. Furthermore, embedded systems come in all shapes and sizes. Some are small, low-power and battery operated (television remote controls, digital wrist watches, calculators, etc.), while others are larger, more powerful and power hungry (cell phones, tablets, etc.). At other times the embedded system has created a completely new market (quadcopters, table-ordering tablets, etc.).

Sensor Circuits
Technically speaking, an embedded system is a digital circuit board with a variety of electrical components. Some of the electrical components are passive (resistors, capacitors, inductors, transformers, etc.) while others are active (diodes, transistors, LEDs, ICs, MCUs, MPUs, etc.). All of the electrical components are mechanically supported by a printed circuit board, or PCB for short. The PCB not only provides mechanical support for the electrical components but also electrically interconnects them by way of copper traces. In some instances, the passive components play a supporting role (bypass capacitors, decoupling capacitors, coupling capacitors, shunt references, etc.) to the active components (slave ICs, master ICs, analog ICs, etc.). In other instances the passive and active components are combined in such a way as to form a circuit. The circuits range from primitive circuits (voltage dividers, low-pass filters, high-pass filters, etc.), through semi-advanced circuits (inverting amplifiers, non-inverting amplifiers, differential amplifiers, unity-gain buffers, summing amplifiers, integrators, Wheatstone bridges, etc.), to advanced circuits (ECG, PPG, EMG, EEG, etc.). Furthermore, primitive circuits are the building blocks for semi-advanced circuits, and primitive and semi-advanced circuits together form the building blocks for advanced circuits. It is also worth mentioning that nine times out of ten, primitive, semi-advanced and advanced circuits are not only analog, but also serve as sensor circuits. Light intensity, humidity, temperature, pressure, sound, ECG, PPG, etc. are all examples of analog signals that we are interested not only in measuring, but also in processing and recording. The sensor circuits that detect and measure these analog signals must themselves be analog. Sensor circuits encode the information they are measuring into an analog voltage signal.
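
To make the most primitive building block above concrete, here is a minimal sketch in C that computes the output of a two-resistor voltage divider and then inverts the equation to recover a sensor's resistance from the measured voltage. The supply voltage, the 10 kOhm fixed resistor and the function names are assumptions chosen purely for illustration.

    #include <stdio.h>

    /* Output of a two-resistor voltage divider: Vout = Vin * R2 / (R1 + R2).
     * Here R2 stands in for a resistive sensor (e.g. a thermistor or photoresistor). */
    static double divider_vout(double vin, double r1, double r2)
    {
        return vin * r2 / (r1 + r2);
    }

    /* Invert the divider equation to recover the sensor resistance from a
     * measured output voltage: R2 = R1 * Vout / (Vin - Vout). */
    static double divider_r2(double vin, double r1, double vout)
    {
        return r1 * vout / (vin - vout);
    }

    int main(void)
    {
        const double vin = 3.3;     /* assumed supply voltage */
        const double r1  = 10000.0; /* assumed fixed resistor, 10 kOhm */
        const double r2  = 4700.0;  /* assumed sensor resistance at this instant */

        double vout = divider_vout(vin, r1, r2);
        printf("Vout = %.3f V\n", vout);
        printf("Recovered R2 = %.0f Ohm\n", divider_r2(vin, r1, vout));
        return 0;
    }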

Advanced circuits, unlike primitive and semi-advanced circuits, are unique in that they can serve as a finished product. For example, an ECG machine is nothing more than an advanced ECG circuit. At some point, semiconductor companies realized that not only did a lot of PCBs contain these sensor circuits (ECG, PPG, etc.), but that it would take little to no effort on their part to port these sensor circuits into a single IC chip. After all, the fact that they were present in so many PCBs proved that a market for them already existed. Furthermore, simple math dictated that the profit margins would be astronomical. Hardware-wise, these hardwired ICs not only drastically simplified schematic design, but also simplified layout design, reduced PCB size, and guaranteed functionality. No longer did PCB designers have to be seasoned analog layout and circuit designers to incorporate advanced circuits into their PCBs. Technically speaking, all that needed to be done to incorporate an advanced circuit into a PCB was to drop in the single-chip IC and provide power and ground. However, the two biggest drawbacks of these single-chip solutions are that they completely lack scalability and are expensive compared to their discrete counterparts. Examples of these single-chip solutions include, but are not limited to, the ADS1292R and the AFE4400.
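
To give a feel for how one of these single-chip front ends is actually used once it has been dropped onto the board, here is a rough sketch of reading one raw sample frame over SPI. It is only an outline under stated assumptions: the cs_assert/cs_release/spi_transfer helpers are hypothetical board-support functions, and the 9-byte frame length is an illustrative guess; the real command bytes, register map and frame format come from the specific device's datasheet (the ADS1292R's, for instance).

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical board-support helpers; on a real design these would wrap the
     * MCU's SPI peripheral and a GPIO pin used as chip select. */
    extern void    cs_assert(void);
    extern void    cs_release(void);
    extern uint8_t spi_transfer(uint8_t byte_out); /* full duplex: returns the byte read */

    /* Read one raw data frame from a single-chip analog front end.
     * The frame length is an assumption for illustration only; the device
     * datasheet defines the real status and channel byte layout. */
    static void afe_read_frame(uint8_t *frame, size_t len)
    {
        cs_assert();
        for (size_t i = 0; i < len; i++) {
            frame[i] = spi_transfer(0x00); /* clock out dummy bytes, clock in data */
        }
        cs_release();
    }

    /* Typical call: uint8_t frame[9]; afe_read_frame(frame, sizeof frame); */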

So what is an embedded system? An embedded system is a complex circuit board which contains a variety of sensor circuits. As discussed previously, the sensor circuits come in all shapes and sizes: they range in complexity (primitive to advanced), type (analog to digital), and construction (discrete or integrated). Another name for sensor circuits is slave circuits, as they contain no intelligence of their own. The slave circuits are sprinkled throughout the PCB and, unsurprisingly, each adds a specific capability to the embedded system. However, the heart and soul of every embedded system lies in the CPU. Every embedded system has one and only one CPU. The CPU is by far the most expensive circuit found on most embedded systems. Furthermore, unlike slave circuits, which can be small, simple, analog and constructed out of discrete components, the CPU is strictly large, complex, digital and only available as an IC. Moreover, schematically speaking, the CPU is typically placed at the center of the PCB, surrounded by slave circuits. Every slave circuit is not only electrically connected to the CPU (by copper traces), but also communicates with the CPU either through a serial communication protocol (I2C, SPI, UART, etc.) or through an analog signal. Because of their size, complexity and cost, for the longest time only semiconductor companies had the capital, experience, tooling and skill set needed to design a CPU.
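
For the digital slave circuits, that communication usually boils down to short register-level transactions. The sketch below shows, in outline, the CPU polling a hypothetical temperature sensor over I2C; the 0x48 address, the register layout and the i2c_write/i2c_read helpers are all assumptions for illustration, since the real protocol is defined by the particular sensor's datasheet.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical helpers wrapping the MCU's I2C peripheral. */
    extern bool i2c_write(uint8_t addr, const uint8_t *buf, uint8_t len);
    extern bool i2c_read(uint8_t addr, uint8_t *buf, uint8_t len);

    #define TEMP_SENSOR_ADDR 0x48u /* assumed 7-bit slave address */
    #define TEMP_RESULT_REG  0x00u /* assumed register holding the reading */

    /* Ask the slave circuit for one raw 16-bit temperature reading. */
    static bool read_temperature_raw(uint16_t *raw)
    {
        uint8_t reg = TEMP_RESULT_REG;
        uint8_t buf[2];

        if (!i2c_write(TEMP_SENSOR_ADDR, &reg, 1)) /* select the result register */
            return false;
        if (!i2c_read(TEMP_SENSOR_ADDR, buf, 2))   /* read the two data bytes back */
            return false;

        *raw = (uint16_t)((buf[0] << 8) | buf[1]); /* assume big-endian result */
        return true;
    }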

Currently, an embedded system can be equipped with one of five types of CPUs: the microcontroller (MCU), the microprocessor (MPU), the digital signal processor (DSP), the graphics processing unit (GPU) and the field programmable gate array (FPGA). No one CPU type is better than another; rather, each was designed for a specific purpose. Take for instance the MCU, which is an all-in-one (CPU, RAM and FLASH) processor intended for a single, application-specific purpose. The advantages of MCUs are that they are physically compact, low cost and low power; their disadvantages are that they lack scalability and are limited in primary memory, secondary memory and processing power. Examples of MCU-based embedded systems include, but are not limited to, thermometers, thermostats, digital watches, television remote controls, digital weight scales, etc. Conversely, MPUs are general-purpose processors intended to run multiple applications. Unlike an MCU, an MPU has its CPU, primary memory and secondary memory in separate IC chips. This configuration allows primary and secondary memory to scale. The advantages of MPUs are high processing power, decent primary memory and decent secondary memory; their disadvantages are high power consumption and a physically larger IC package. Examples of MPU-based embedded systems include, but are not limited to, cell phones, tablets, PDAs, etc. DSPs are ideal for applications that require real-time signal processing, such as anti-lock brakes, radio transmission/reception, audio compression/decompression, etc. Finally, GPUs are ideal for applications requiring 3D graphics, such as modern video games. Some semiconductor companies designed one type of CPU, others two, and still others three. Take for instance Texas Instruments, which manufactured an MCU (MSP430), an MPU (Sitara) and a DSP (C2000). Worse yet, some semiconductor companies designed different versions of the same type of CPU. Take for instance Atmel which, at one point, manufactured 8-bit, 16-bit and 32-bit MCUs. The simplest CPU to both design and manufacture would have to be the MCU. At one point, every semiconductor company manufactured an MCU: Texas Instruments had the MSP430 lineup, Atmel had the ATmega, Microchip had the PIC, Freescale had the HCS12, and so on. At that time, every embedded engineer had not only several CPU options, but several CPU vendor options when designing their embedded system.

Semiconductor Companies

There was a point in time when it seemed like every semiconductor company had designed its own MCU, MPU, DSP, etc. To see why every semiconductor company had jumped on the CPU bandwagon, it is vital to understand the bigger picture. One reason every semiconductor company was designing its own CPU is that, to them, it cost practically zero investment dollars to do so. It takes little to no effort on a semiconductor company's part to switch from manufacturing OpAmps to manufacturing CPUs; manufacturing a CPU involves essentially the same steps as manufacturing a diode, a transistor or an OpAmp. The only thing semiconductor companies had to do was find CPU designs. Luckily, even back in those days, open-source CPU designs existed; in fact, universities would use these open-source CPUs to teach computer architecture courses. Another reason every semiconductor company was manufacturing its own CPU is that, as it turns out, the CPU business is very lucrative. Open up any embedded system and, by and large, the most expensive and profitable hardware component will be the CPU. Passive components for the most part cost less than 10 cents and active components around 50 cents, whereas a CPU can cost anywhere from tens to hundreds of dollars. Take, for instance, the average price of an MCU in 2013: about $7.50, while the cost to manufacture it hovered around $0.50. That is a $7 profit per chip sold. Yikes! A “design win” for any semiconductor company would rake in millions, if not billions, of dollars in profit. Double yikes. Back in those days, to the semiconductor industry, design wins were like printing money.

When designing an embedded system, hardware engineers must decide not only which CPU they are going to use, but also which semiconductor company it is going to come from. Go back a couple of years and it almost felt like every semiconductor company was manufacturing its own CPU. Take for instance the MCU market: Texas Instruments had the MSP430 lineup, Atmel had the ATmega, Microchip had the PIC, Freescale had the HCS12, and so on. In all honesty, many engineers selected a CPU from a particular company simply out of familiarity with it. Furthermore, not only did every semiconductor company manufacture its own CPU, but each marketed it as the lowest-cost, lowest-power, highest-throughput CPU on the market. As hard as it may be to believe, it is quite difficult to benchmark CPU metrics, and in the instances where you can benchmark a specific metric, the differences are most of the time negligible. Hardware engineers often felt overwhelmed by the bewildering array of CPUs, felt that the market was saturated, and were unsure whether, at the end of the day, they were selecting the best CPU for the job.

Every successful embedded system goes through several revisions throughout its run. Typically, the first revision of any embedded system will always be a prototype. The prototype tests the waters and serves as the proof of concept. If the first revision is successful, then a second revision follows. Typically, every revision means more features, which in turn requires a larger CPU. For example, the revision history of an embedded system might go from an 8-bit MCU to a 16-bit MCU, a 16-bit MCU to a 32-bit MCU, or a 32-bit MCU to a 32-bit MPU. A revision might also mean switching vendors, say from a 16-bit Atmel MCU to a 16-bit MSP430 MCU.

The Problem

Should a product be fortunate enough to be successful, then a revision is bound to happen sooner or later. Revisions are not only unavoidable, they are a sign of a successful product. Any embedded system that is not going through a revision, or has no plans for a revision in the foreseeable future, is showing an early sign of trouble. A revision can take the form of a software revision, a hardware revision, or both. A software revision almost always implies a hardware revision, and a hardware revision almost always implies a CPU performance upgrade. A hardware revision might, for example, mean upgrading the MCU from 8-bit to 16-bit, from 16-bit to 32-bit, or from a 32-bit MCU to a 32-bit MPU. Of course, a hardware revision doesn't necessarily mean upgrading to a higher-performance CPU. Sometimes a hardware revision means switching CPU vendors, such as going from a 16-bit Atmel MCU to a 16-bit MSP430 MCU, and many times the reason for switching vendors is cost savings. Of course, upgrading an embedded system's CPU is easier said than done. As was mentioned previously, back in those days every semiconductor company was concurrently, yet independently, designing its own CPU. Each CPU had its own instruction set (RISC vs. CISC), architecture (von Neumann vs. Harvard), compiler, debugger, etc. What this all meant is that code was not portable. Code that was written for one CPU, say an 8-bit ATmega MCU, could not run on another CPU, say an 8-bit Microchip PIC MCU. As unbelievable as it may sound, the same headache experienced when “porting” code from one semiconductor company's CPU to another's would sometimes also be experienced when porting code between CPUs manufactured by the same semiconductor company, say from an Atmel 8-bit MCU to an Atmel 32-bit MCU. Ask any seasoned firmware engineer and they will attest that porting firmware from one CPU to another is a time-consuming, frustrating and cumbersome task, and that they will try to delay it for as long as possible.
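
To make the portability problem concrete, here is the same trivial job, toggling one output pin, written twice against the register sets of two different MCU families that have already come up (an Atmel 8-bit AVR and a TI MSP430). The two routines live in separate source files built with separate toolchains, and the pin choices are arbitrary illustrations. The point is that the headers, register names and even the surrounding boilerplate differ, so neither version will compile, let alone run, for the other part.

    /* led_avr.c -- Atmel 8-bit AVR (e.g. an ATmega), avr-gcc toolchain. */
    #include <avr/io.h>

    void toggle_led(void)
    {
        DDRB  |= (1 << DDB0);   /* data direction register: make PB0 an output */
        PORTB ^= (1 << PORTB0); /* toggle PB0 */
    }

    /* led_msp430.c -- TI MSP430, msp430-gcc toolchain. Same job, different world. */
    #include <msp430.h>

    void toggle_led(void)
    {
        WDTCTL = WDTPW | WDTHOLD; /* MSP430-specific: stop the watchdog first */
        P1DIR |= BIT0;            /* make P1.0 an output */
        P1OUT ^= BIT0;            /* toggle P1.0 */
    }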

Every CPU is released with something called a datasheet. A CPU datasheet is a document which explains everything that needs explaining about that CPU: supply voltage (2.7, 3.3, 5 volts), serial peripherals (I2C, SPI, UART, etc.), basic peripherals (ports), advanced peripherals (ADC, DMA), registers (data direction registers, I/O registers), primary memory (static RAM, dynamic RAM), secondary memory (EEPROM, FLASH, FRAM, etc.), CPU bus width (8-bit, 16-bit, 32-bit, etc.), CPU architecture (Harvard, von Neumann, etc.), instruction set (RISC, CISC, etc.), IC packaging (DIP, BGA, etc.), and the list goes on and on. Unlike sensor IC datasheets, which are tens of pages long, CPU datasheets are hundreds, if not thousands, of pages long. In all honesty, a CPU datasheet feels more like a book than a document. Furthermore, it contains highly technical information that takes time to truly comprehend. Reading a datasheet is less like reading a history book and more like reading a mathematics, physics and chemistry book combined; it is a painstakingly slow and frustrating process. In order to write truly efficient firmware, an engineer must master the CPU datasheet; porting code means reading and mastering two of them. At the time, the landscape was chaotic to say the least. And then came ARM, the knight in shining armor for firmware engineers, but not necessarily for semiconductor companies.
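
As an example of the kind of register-level detail a datasheet dictates, here is a sketch that configures and reads the ADC on an 8-bit AVR, one of the families mentioned earlier. The reference, channel and prescaler choices are illustrative assumptions; every bit written below comes out of the ADC chapter of that family's datasheet, and an equivalent routine for a different vendor's MCU would look completely different.

    #include <avr/io.h>
    #include <stdint.h>

    /* Read one sample from ADC channel 0 on an 8-bit AVR (e.g. an ATmega328P).
     * Every register and bit name below is defined in the device datasheet. */
    static uint16_t adc_read_channel0(void)
    {
        ADMUX  = (1 << REFS0);                  /* reference = AVcc, select channel 0 */
        ADCSRA = (1 << ADEN)                    /* enable the ADC */
               | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0); /* clock prescaler /128 */

        ADCSRA |= (1 << ADSC);                  /* start a single conversion */
        while (ADCSRA & (1 << ADSC)) {          /* ADSC clears when the conversion completes */
            ;
        }
        return ADC;                             /* 10-bit result (ADCL/ADCH combined) */
    }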

ARM, which originally stood for Advanced RISC Machines, was, and still is, a research and development company that designs CPU cores and works in conjunction with semiconductor companies. ARM cannot exist without semiconductor companies, but semiconductor companies can, and did, exist without ARM. ARM is completely dependent on semiconductor companies, yet it is not itself a semiconductor company: it does not manufacture chips; its main focus is CPU research and design. Nowadays, ARM and semiconductor companies are besties, but there was a time when that was not the case. Whether ARM was aware of the porting nightmare that existed, or whether they simply happened to arrive at precisely the right time, only they know. What is known is that they resolved the issue and shook up a well-established industry.

Semiconductor Reluctance

Initially, semiconductor companies were reluctant to license and manufacture ARM cores. After all, why should they? Licensing meant sharing the profits, and it meant giving up a key differentiator: why license someone else's architecture when they already had their own? I will probably strike a chord with semiconductor companies when I say this, but since the advent of ARM, is there really any difference between purchasing a Cortex-M CPU from TI and purchasing one from Microchip?

Customer Demand

As time passed, semiconductor companies were forced to license ARM's CPU architecture, as demand for their in-house CPUs declined. Customers had seen the advantages that ARM cores offered in terms of standardization and scalability.