Have you heard of the ENIAC computer? This groundbreaking piece of equipment helped launch the computer age in 1946: its mainframe weighed 27 tons, filled 1,800 square feet, and drew 200 kW of electricity. It set new standards as the world's first programmable, general-purpose electronic digital computer!
ENIAC generated headlines that feel strikingly familiar in today's AI landscape.
Popular Science Monthly captured the mood in April 1946: "With computers doing most of the tedious calculations on problems that had baffled men for so long, today's equation may become tomorrow's rocket ship!"
“U. of P’s 30-Ton Electronic Brain Can Think Faster Than Einstein,” announced the Philadelphia Evening Bulletin.
Today, over 75 years later, the Cortex-M4 chip that powers your smart refrigerator is more than 10,000 times faster than ENIAC, while drawing only 90 µA/MHz and taking up just a few square inches. That is the natural arc of computing: as the technology developed, devices became increasingly focused and efficient, fulfilling specific tasks more cost-effectively and quickly than before.
Technology Specialization

Artificial intelligence (AI) has recently generated much excitement, optimism, and anxiety alike, particularly since generative AI surged in popularity over the past year. To understand AI's long-term trajectory, the history of computing hardware offers valuable insight: AI is following a similar path, in which big, powerful technologies start out large and centralized before specializing and localizing into efficient forms that are more readily available at the edge.
From large telephone switchboards to smartphones, power plants to residential solar panels, broadcast television to streaming services: we introduce technologies big and expensive, then begin an iterative process of refining them. AI is no exception. The very large language models (LLMs) that underpin generative AI are already so large as to be unwieldy; solving this calls for specialization, decentralization, and democratization of AI for specific use cases, an approach known as edge AI.
LLMs, such as those in the Generative Pre-trained Transformer (GPT) family, have made AI possible in our modern era by training on massive datasets to understand, generate, and interact with human language, effectively blurring the distinction between machine output and human thought. They present both promise and challenges.
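To make that concrete, here is a minimal sketch of running a small pre-trained language model locally, the kind of compact footprint edge AI points toward. It assumes the Hugging Face transformers library is installed; the choice of distilgpt2 is purely illustrative.

```python
# Minimal sketch: generating text with a small, locally run language model.
# Assumes the Hugging Face "transformers" package (pip install transformers)
# plus a backend such as PyTorch; the model choice is illustrative only.
from transformers import pipeline

# distilgpt2 is a compact (~82M parameter) model that runs comfortably on a
# laptop CPU, orders of magnitude smaller than the LLMs behind today's
# headline-grabbing chatbots.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "The first general-purpose electronic computer was"
outputs = generator(prompt, max_new_tokens=40, do_sample=True)

print(outputs[0]["generated_text"])
```

A model this small will not match the fluency of its giant cousins, and that is the point: specialization trades raw generality for a footprint that fits on local, inexpensive hardware.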