If there’s one phrase that represents innovation over the last couple of years, it’s “Artificial Intelligence.” From smartphones and self-driving cars to even coffee machines, nearly every new product coming to market tries to integrate it. Companies are rushing to adopt it, headlines celebrate its breakthroughs, and many of us interact with AI without even realizing it. But as the excitement grows, so do the questions. Is AI truly transforming the world, or is it still more impractical than we’d like to admit?
One of the biggest concerns about the rise of Artificial Intelligence is its impact on the job market. A December 2023 survey by EY US found that 65% of US employees are anxious about AI replacing their specific jobs, and 75% are concerned that AI will make certain jobs obsolete in general [Source]. In fact, nearly 94,000 positions have been eliminated this year alone with AI listed as a contributing factor. With statistics like these, it is easy to fall into a trap of “doom and gloom” and believe that many of our careers are over before they even started. However, it’s also important to note how inconsistent companies have been in their decisions about AI. For instance, the “buy-now, pay-later” giant Klarna laid off nearly 700 employees after their CEO bragged that he hadn’t hired a human in a year, only to walk that decision back just several months later. Fast-food rivals McDonald’s and Taco Bell also tried integrating AI into their drive-thru systems, but quietly shelved those attempts after realizing how inconsistent the technology was. Putting these case studies aside and considering the broader industry, surveys from MIT have found that nearly 95% of corporate AI rollouts are failing to deliver expected returns.
Why?
It’s because AI is expected to act like a person while sacrificing all the elements that make us human. That idea also raises the question: is AI really “Artificially Intelligent” then?
I say no. We type a prompt into any popular LLM and are instantly astonished by how it talks and responds to us, but there is really nothing special about it; there is never an original thought or a genuinely thoughtful response. When I first learned how LLMs worked, I found it helpful to think of them as search engines that summarize information from the top results. Models are trained on the information available to them (both public and private) and can be guided to recognize patterns, but it is impossible to teach them to create ideas of their own. Every output from any AI model today is simply a compilation of information that was written and published by a human being. The model does not possess the ability to create new pieces of information, whether that be art, music, poems, or even lines of code.
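If you want to see what I mean, here’s a tiny toy “language model” I sketched out in Python. It is nowhere near a real LLM (those use neural networks and predict tokens probabilistically across billions of parameters), but it captures the core idea: every word it “writes” is just a reshuffling of words a human already wrote in its training text.

```python
# A toy "language model": it learns which word follows which in its training
# text, then generates new sentences by replaying those patterns.
# (My own illustrative sketch -- real LLMs are far more sophisticated, but
# the idea of "recombining what it has seen" is the same.)
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog across the mat"
)

# Build a table: for each word, which words followed it in training?
followers = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Generate text by repeatedly picking a word that followed the
    previous word somewhere in the training data."""
    output = [start]
    for _ in range(length):
        options = followers.get(output[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        output.append(random.choice(options))
    return " ".join(output)

print(generate("the"))
# Every word in the output already existed in training_text.
```

Run it a few times and the output feels vaguely new, but not a single word of it is original, which is exactly my point.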
Moreover, one of the lesser-considered aspects of AI use is its impact on the environment. Mark Moran (Head of Data, Systems & Intelligence at Bayer, previously at John Deere) gave a talk at UIUC just several weeks ago discussing the implications AI has for our future. Outside of the effects that AI might have on career opportunities, Moran spoke about the true costs of operating accessible models. For instance, a single text prompt to Google’s Gemini system uses about 0.24 watt-hours (roughly the same as running a high-efficiency LED bulb for several minutes), while a single AI image generation can use the energy equivalent of fully charging a smartphone. Beyond prompting alone, the rise of new data centers all across the globe implies a new type of energy crisis: existing power stations cannot supply the massive amount of energy that new data centers demand in the towns where they are built. AI data centers strain power grids by rapidly increasing electricity demand, especially in regions with high concentrations of facilities. Local governments are now faced with the choice of either raising electricity costs for residents or gambling millions of dollars on new energy infrastructure. If AI does end up cementing itself in today’s world, it needs to become far more conscious of the energy it draws from the world around it.
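To put that 0.24 watt-hour figure into perspective, here’s a rough back-of-envelope calculation. The smartphone battery size and the daily prompt volume below are my own illustrative assumptions, not numbers from Moran’s talk or from Google:

```python
# Back-of-envelope math around the 0.24 Wh per-prompt figure.
WH_PER_TEXT_PROMPT = 0.24                 # reported figure for a Gemini text prompt
SMARTPHONE_BATTERY_WH = 15.0              # assumption: a typical phone holds ~15 Wh
ASSUMED_PROMPTS_PER_DAY = 1_000_000_000   # assumption: one billion text prompts/day

prompts_per_phone_charge = SMARTPHONE_BATTERY_WH / WH_PER_TEXT_PROMPT
daily_energy_mwh = WH_PER_TEXT_PROMPT * ASSUMED_PROMPTS_PER_DAY / 1_000_000

print(f"~{prompts_per_phone_charge:.0f} text prompts = one full phone charge")
print(f"~{daily_energy_mwh:,.0f} MWh per day at a billion prompts")
```

Under those assumptions, a billion text prompts a day already adds up to hundreds of megawatt-hours, and that is before counting image generation, video, or the training of the models themselves.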
Everything said above is the opinion of a college student who eats noodles and tacos every day, so it’s definitely not the most reliable perspective. However, I was extremely fortunate to meet Arvind Krishna (the CEO of IBM) and Sidney Lu (CEO of Foxconn Interconnect Technologies, who also has a building named after him on campus!). When asked for their thoughts, they both answered that AI is a tool that still needs refining, but it is here to stay. Just food for thought (get it? Better than my food, for sure).
Regardless of whether AI continues to be groundbreaking or turns out to be just another tech fad, it is important for us to take a step back and look at the bigger picture. The hype around AI can sometimes overshadow the very real challenges it poses, from issues of privacy and job displacement to planning for its energy needs. Aside from AI itself, the power and influence wielded by Big Tech companies are also worth scrutinizing, especially in how they handle our data and shape public views. Ultimately, AI’s place in today’s society is still to be generated.
“Think big, think bold.” - Arvind Krishna, when asked for the best advice he had ever been given