Non-technically speaking, an embedded system is a hand-sized circuit board that makes the decisions in a larger system. Embedded systems are found everywhere, and ever since their inception they have revolutionized our lives. Upon their debut, for example, embedded systems made an immediate impact on the electronics industry by replacing analog consumer electronics (the CRT television, VCR, cassette tape, analog camera, analog clock, analog telephone, etc.) with their digital counterparts (the digital television, DVD player, CD player, digital camera, digital clock, cellphone, etc.). Some embedded systems are themselves a finished product (a television remote control, digital wristwatch, calculator, etc.), while in other cases the embedded system is built into an appliance, where it controls the electromechanical components (actuators, solenoids, electric motors, servo motors, etc.) and makes the system run more efficiently. Furthermore, embedded systems come in all shapes and sizes. Some are small, low-power and battery-operated (television remote controls, digital wristwatches, calculators, etc.), while others are larger, more powerful and power-hungry (cell phones, tablets, etc.). At other times an embedded system has created a completely new market (the quadcopter, the table-ordering tablet, etc.).
There was a point in time when it seemed like every semiconductor company had designed its own MCU, MPU, DSP, etc. To see why every semiconductor company jumped on the CPU bandwagon, it helps to understand the bigger picture. One reason is that, to a semiconductor company, designing a CPU cost almost nothing in investment dollars: it takes little to no extra effort to switch from manufacturing op-amps to manufacturing CPUs, because the same fabrication steps that produce a diode, a transistor or an op-amp also produce a CPU. The only thing semiconductor companies had to do was find CPU designs, and luckily, even back in those days, open-source CPU designs existed. In fact, universities would use these open-source CPUs to teach computer architecture courses. Another reason every semiconductor company was manufacturing its own CPU is that, as it turns out, the CPU industry is a very lucrative business. Open up any embedded system and, by and large, the most expensive and most profitable hardware component will be the CPU. Passive components for the most part cost less than 10 cents and active components around 50 cents, whereas a CPU can cost tens or even hundreds of dollars. Take, for instance, the average price of an MCU in 2013: $7.50, while the cost to manufacture it hovered around $0.50. That is a $7 profit per chip sold. Yikes! A "design win" for any semiconductor company would rake in millions if not billions of dollars in profit. Double yikes. Back in those days, to the semiconductor industry, design wins were like printing money.
When designing an embedded system, hardware engineers must decide not only which CPU to use, but also which semiconductor company it will come from. Go back a couple of years and it almost felt like every semiconductor company was manufacturing its own CPU. Take, for instance, the MCU market: Texas Instruments had the MSP430 lineup, Atmel had the ATmega, Microchip had the PIC, Freescale had the HCS12, and so on. In all honesty, many engineers selected a CPU from a particular company simply out of familiarity with it. Furthermore, not only did every semiconductor company manufacture its own CPU, but each marketed it as the lowest-cost, lowest-power, highest-throughput CPU on the market. As hard as it is to believe, CPU metrics are somewhat difficult to benchmark, and in the instances where you can benchmark a specific metric, most of the time the differences are negligible. Hardware engineers often felt overwhelmed by the bewildering array of CPUs, felt that the market was saturated, and were unsure whether, at the end of the day, they were selecting the best CPU for the job.
Every successful embedded system goes through several revisions throughout its run. Typically, the first revision of any embedded system is a prototype, which tests the waters as a proof of concept. If the first revision is successful, then a second revision follows. Every revision typically adds features, which in turn requires a more capable CPU. For example, the revision history of an embedded system might go from an 8-bit MCU to a 16-bit MCU, from a 16-bit MCU to a 32-bit MCU, or from a 32-bit MCU to a 32-bit MPU. A revision might also mean switching vendors, say from a 16-bit Atmel MCU to a 16-bit MSP430 MCU.
Should a product be fortunate enough to be successful, then a revision is bound to happen sooner or later. Revisions are not only unavoidable, they are a sign of a successful product; an embedded system that is not going through a revision, and has no plans for one in the foreseeable future, is an early sign of trouble. A revision can take the form of a software revision, a hardware revision, or both, and a hardware revision almost always implies a software revision. More often than not, a hardware revision implies a CPU performance upgrade: for example, upgrading the MCU from 8-bit to 16-bit, from 16-bit to 32-bit, or from a 32-bit MCU to a 32-bit MPU. Of course, a hardware revision doesn't necessarily mean upgrading to a higher-performance CPU. Sometimes it means switching CPU vendors, such as going from a 16-bit Atmel MCU to a 16-bit MSP430 MCU, often for cost-saving reasons. Upgrading an embedded system's CPU, however, is easier said than done. As was mentioned previously, back in those days every semiconductor company was concurrently yet independently designing its own CPU. Each CPU had its own instruction set (RISC vs. CISC), architecture (von Neumann vs. Harvard), compiler, debugger, etc. What this all meant is that code was not compatible. Code written for one CPU, say an 8-bit ATmega MCU, could not run on another CPU, say an 8-bit Microchip PIC MCU. As unbelievable as it may sound, the same headaches experienced when "porting" code from one semiconductor company's CPU to another's were sometimes also experienced when porting code between CPUs manufactured by the same semiconductor company, say from an Atmel 8-bit MCU to an Atmel 32-bit MCU.
Ask any seasoned firmware engineer and they will attest that porting firmware from one CPU to another is a time-consuming, frustrating, cumbersome, monumental task, and that they would try to delay the porting as long as possible.
Every CPU is released with something called a datasheet. A CPU datasheet is a document that explains everything that needs explaining about that CPU: supply voltage (2.7, 3.3, 5 volts), serial peripherals (I2C, SPI, UART, etc.), basic peripherals (ports), advanced peripherals (ADC, DMA), registers (Data Direction Register, I/O registers), primary memory (static RAM, dynamic RAM), secondary memory (EEPROM, flash, FRAM, etc.), CPU bus width (8-bit, 16-bit, 32-bit, etc.), CPU architecture (Harvard, von Neumann, etc.), instruction set (RISC, CISC, etc.), IC packaging (DIP, BGA, etc.), and the list goes on and on. Unlike sensor IC datasheets, which are tens of pages long, CPU datasheets are hundreds if not thousands of pages long. In all honesty, a CPU datasheet feels more like a book than a document, and it contains highly technical information that takes time to truly comprehend. Reading a datasheet is less like reading a history book and more like reading a mathematics, physics and chemistry book combined; reading it is a painstakingly slow and frustrating process. To write truly efficient CPU code, a firmware engineer must master the CPU datasheet, and porting code means reading and mastering two CPU datasheets. At the time, the landscape was chaotic to say the least. And then came ARM, the knight in shining armor for firmware engineers, though not necessarily for semiconductor companies.
ARM, originally an acronym for Acorn RISC Machine and later Advanced RISC Machines, was, and still is, a research and development company that designs CPUs and works in conjunction with semiconductor companies. ARM's main focus is CPU research; it is completely dependent on semiconductor companies but is not itself a semiconductor company. ARM cannot exist without semiconductor companies, but semiconductor companies can, and did, exist without ARM. Nowadays, ARM and semiconductor companies are besties, but there was a time when that was not the case. Whether ARM was aware of the porting nightmare that existed, or simply happened to arrive at precisely the right time, only they know. What is known is that they resolved an issue and shook up a well-established industry.
Initially, semiconductor companies were reluctant to license and manufacture ARM cores. After all, why should they? Licensing meant sharing the profits, and adopting someone else's architecture meant giving up their differentiator when they already had architectures of their own. I will probably strike a chord with semiconductor companies when I say this, but since the advent of ARM, is there any real difference between purchasing a Cortex-M CPU from TI and purchasing one from Microchip?
As time passed, semiconductor companies were forced to license ARM's CPU architectures as demand for their proprietary CPUs declined. Customers had seen the advantages of ARM cores in terms of the standardization and scalability that ARM offered.