July 6, 2025
Segment: Flitting Finch | Socrates and AI

Technology is a disruptive force. As a child of the 1990s, I can say the internet grew up with me perhaps more than I did with it. For context, consider that personal computers reached ubiquity in American public schools only a few years before I entered kindergarten in 1992. 

I have many memories of the early internet. The old AOL and Netscape discs, the iconic screeching of a dial-up modem, and the terrible design principles of early websites all stick in my memory. When I was eight or nine years old, I recall searching for “SNES” (Super Nintendo Entertainment System, or so I thought) only to open a page with a Clip Art-quality, flashing sign that read: Suzie’s Nasty Erotic Sexhouse. 

Okay… My dad and uncle got a laugh out of that one. 

As I reflect on my high school and college years, I fully understand why teachers voiced concerns about students over-relying on the internet for research papers. I think all of us who came up during those years remember the mantra of many instructors: “If you use Wikipedia, read the cited articles. Do not cite Wikipedia or you will fail.” In short, do not trust everything you read. 

At the time, though, their trepidation was an unwarranted nuisance to me. My feeling was and is that if you’re a lazy student, the internet will enable said laziness and offer many figurative holes for you to fall into. But if you’re diligent, it’s a hugely useful tool and can help you produce a better product in less time. For the people slightly younger than me, perhaps a more relevant focal point for this same debate is search engines (i.e., the downfalls of “just Googling it”). 

Since the advent of ChatGPT in 2022, the conversation has shifted to the impacts of generative artificial intelligence (GenAI) engines running on top of large language models (LLMs). As anecdotal evidence of this shift, I would point to the sheer number of fear-mongering headlines that read something like, “AI Will Kill All of the Jobs – No One Is Safe!” 

 

Let’s put things in perspective, though. Deep concern about technology’s impact on learning and a functioning society is a (very) longstanding tradition. 

In Plato’s Phaedrus, Socrates laments the pitfalls inherent in written language: “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” 

At face value, Socrates’ point might seem silly, but I certainly empathize with him. Reading as a medium has clear disadvantages compared to oral tradition: a text exercises the memory less, and it cannot answer questions the way a living teacher can. Yet I doubt any of us would lament the existence of written language (even though the reader might lament reading this article). 

 

America and the world have survived the arrival of radio, television, computers, the internet, search engines… 

Still, I ask myself: Does GenAI represent the same type of shift and the same type of problems? 

In a way, yes. LLMs and GenAI are cousins of search engines and of the autocomplete features included in software services like Google’s suite. An AI model simply takes in inputs—data, information, images, files—along with a prompt submitted by the user, then generates the most-likely-to-fit content as output. 
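To make that concrete, here is a toy sketch in Python (my own illustration, nothing close to how a production LLM is actually built): count which word most often follows each word in some training text, then “complete” a prompt one likeliest word at a time.

    from collections import Counter, defaultdict

    # Toy "training data" -- a real model would see billions of words.
    training_text = (
        "the cat sat on the mat the dog sat on the rug "
        "the cat chased the dog the dog chased the cat"
    )

    # Count how often each word follows each other word (a bigram model).
    follows = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    def predict_next(word):
        """Return the single most likely word to follow `word`."""
        candidates = follows.get(word)
        if not candidates:
            return "<unknown>"
        return candidates.most_common(1)[0][0]

    # "Generate" a completion from a prompt, one likeliest word at a time.
    output = ["the"]
    for _ in range(6):
        output.append(predict_next(output[-1]))
    print(" ".join(output))  # -> "the cat sat on the cat sat"

The scale and statistics are incomparably richer in a real LLM, but the essential move is the same: predict the likeliest continuation of the input, with no understanding attached.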

In industrial settings, a well-trained, well-tailored AI model offers immense efficiency gains. But, as Socrates (or Plato) did with writing, I have some reservations about this new technology. 

Unlike the media listed earlier (e.g., writing, radio), AI models synthesize their inputs and create something new. For me, this crosses a line, because a student or an entry-level worker no longer needs to ingest material, formulate an argument, develop said argument, and so on. 

How can young people truly learn to learn? What is the pathway for them to develop their ability to think critically and exercise their “discernment muscles”? How do they enter the workforce and grow within a business? What stops them from slightly tailoring the AI outputs to beat a detector algorithm? 

More pointedly, will overreliance on AI further erode the fundamentals and foundation of learning and critical thinking in our society, reducing our ability to produce high-functioning citizens of this culture and constitutional republic? 

I think so… especially absent appropriate guardrails, monitors, and controls. I really do. American society needs to view AI models as not just an industrial disruptor but a cultural one. I fear the cost of not doing so will be much greater than corporate profit and loss statements. I also fear working with a new generation who know nothing about our company’s operations other than what ChatGPT tells them. 

But what I fear most is that AI seems poised to close off the entry pathways for many young people looking to enter the workforce. The implications of this deeply concern me. Creative solutions must be implemented quickly, or another generation will stumble in their early careers like many Millennials and Zoomers did. 


You, the wise reader, might say: “AI could help students learn, because they don’t have to waste time reading all of the source material to gain an understanding.” 

Or you might argue: “Memorization and oral tradition didn’t vanish because of writing. Radio, television, and the internet are all part of our day-to-day lives. Also, Finch, back in the day there were students who tried to hide lazy citations or pass papers off as their own.” 

And I would reply to you with a very skeptical, “Sure. But this technology is different. This is a mind with no sense of critical thought and no discernment. AI is the epitome of garbage in, garbage out.” 

In closing, I would like to offer a few simple guidelines for the proper use of AI, guidelines I hope someone smarter than me already has in mind and is implementing: 

  • Do not be lazy. Know thoroughly the inputs you are handing to the AI model! 
  • Do not trust without verifying. AI can “lie.” It can get things wrong. It can reach the wrong conclusions. 
  • Do not treat AI as static. Give the AI model feedback! Current AI models are not sentient, but even basic users can “train” them through simple discussion (e.g., Why did you give me that? Does that follow the logic or language structure I told you to use?). 
  • Do not accept AI output as delivered. AI-generated content can be a huge accelerator, but in most cases (especially in academia), it should be used as a baseline or starting point in developing your own material. 

Can you think of any other important guidelines for the proper use of AI? Do you also worry about the potential degrading impact this could have on our culture and society? 

I would love to discuss with anyone willing, so please reach out via email, social media, or Discord!