Several Examples of Why AI Is Not Going to Replace Our Jobs
Anytime soon at least.
Disclaimer: This article is not serious.
LLMs are doing some pretty amazing things, and I would be lying if I said they didn't make my job much easier. I use ChatGPT in a lot of interesting and helpful ways:
- Reformatting large SQL schemas.
- Helping me write Spark code.
- Telling me jokes / giving me a laugh.
- Summarizing things for me.
- Generating or rewording paragraphs when writing product specs.
In these tasks and more, ChatGPT has been saving me a lot of time. But for a while longer, that is all ChatGPT will do: save me time. So when Scott Galloway, one of my favorite voices in tech, says "ChatGPT isn't going to take your job, but someone who knows how to use ChatGPT will," I find that pretty believable. But why is it true? Let's dive into a few examples.
ChatGPT simply can't generate accurate ASCII art
ASCII art is super time-consuming to make, and I am not much of an artist. So I decided to outsource it to ChatGPT. Here are a few of my recent attempts at getting ChatGPT to create ASCII art.
If your 9-to-5 is as an ASCII artist, fear not: your job is safe for now.
Generating Fake Data
I use ChatGPT to create fake data a lot. Even if you're a fast typist, it's significantly faster and easier to give ChatGPT some context and have it write the rows for you. Unless, that is, you want to generate more than a few rows; then you're in trouble.
No matter how hard I tried, I simply could not get it to generate more data.
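Of course, this is exactly the kind of task where a human who knows their tools wins: a few lines of Python will happily generate as many rows as you want. A minimal sketch using only the standard library (the column names and value pools here are made up for illustration):

```python
import random

# Hypothetical columns and value pools, purely for illustration;
# swap in whatever matches your actual schema.
NAMES = ["Alice", "Bob", "Carol", "Dave"]
CITIES = ["Austin", "Boston", "Chicago", "Denver"]

def fake_rows(n, seed=42):
    """Generate n fake (id, name, city, score) rows, reproducibly."""
    rng = random.Random(seed)  # seeded so the same call yields the same data
    return [
        (i, rng.choice(NAMES), rng.choice(CITIES), round(rng.uniform(0, 100), 2))
        for i in range(1, n + 1)
    ]

rows = fake_rows(10_000)  # ten thousand rows, no complaints
```

Ten thousand rows take milliseconds, and the seed means your "fake" dataset is stable across runs, which ChatGPT's output is not.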
Other random requests
Logic Problems
This final example is a bit more serious. Since LLMs are just predicting the next plausible word in a sequence, they often get logic problems wrong.
Ultimately, the confidence ChatGPT projects even when it shouldn't be confident is the reason we aren't losing our jobs to LLMs anytime soon. Their ability to generate results quickly is astounding, but there still needs to be a human on the other side who understands the problem, checking the computer's work for accuracy. Otherwise, we can't trust the output of an LLM.