
Reprinted in its entirety: "ChatGPT: Savior, Slanderer or Spy?"

One of our clients posted this article on the latest AI happenings, and we thought it was significant. We asked them for permission to repost it to our blog because we know it is relevant to our audience. Sometimes these types of articles have a tendency to disappear, so we want to get the word out on our blog as well. That said, we are very happy to share this article and highly recommend it to anyone who is either into AI tech or worried that AI is going to ruin our world. Read, enjoy, and share it. This is an article for our times.

Sincerely, 

Hector Cisneros

ChatGPT: Savior, Slanderer or Spy?

 By Catherine Powell

Image courtesy Pixabay

In the past couple of decades the public has seen the arrival of technology that isn't just game changing; it has revolutionized the way we live and work. Some of these technologies, such as email and texting, have been beneficial, saving us a fortune in postage. Some, like ransomware, have been harmful. And some, like autonomous vehicles and cryptocurrency, have yet to prove themselves one way or the other. Yet none of the technologies that have come before offers the kind of promise and peril of artificial intelligence.

To most of us, the idea of AI brings to mind dystopian sci-fi flicks like 2001: A Space Odyssey, in which a conscious computer loses its electronic mind, or The Terminator, which portends all-out war between robots and humanity. While these films are entertaining, most people don't believe that either outcome is likely to happen, at least not for another hundred years or so. Well, I'm here to tell you that a taste of a future where artificially intelligent machines have a profound and sometimes troublesome effect on humankind has already arrived. What I'm talking about is web-enabled, AI-powered chatbots that do much more than hold a lively conversation with users. While several are still in beta testing, at least one, OpenAI's ChatGPT, is not only readily available to the public, it has already created considerable controversy.

What is ChatGPT designed to do?

The GPT in the name stands for Generative Pre-trained Transformer. What that means is it's designed to do much more than chat. With it you can generate and debug computer code, compose music, create videos, answer test questions, write scripts, essays, and much more. The "pre-trained" portion of its moniker refers to the fact that the model was trained on vast amounts of text, then refined by human trainers, before it was released to the public. It also continues to collect data from users to further hone its ability to produce accurate results. This is vital, since ChatGPT, which was initially launched on November 30, 2022, has faced criticism over its accuracy, bias, and truthfulness. Below is a timeline of newsworthy items involving ChatGPT:

December 2022 - The Atlantic hails ChatGPT as one of the 10 Most Promising Breakthroughs of the Year.  "The story of the year in technology might be the emergence of AI tools that seem to trace the outer bounds of human creativity."

December 2022 - Stack Overflow, a Q&A website for programmers, banned answers generated by ChatGPT after questioning the factual validity of the chatbot's output.

January 4, 2023 - New York City public schools ban students and teachers from using ChatGPT. “While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.”

Since then, other schools have rushed to ban the AI-enabled bot.  This prompted one contributor at USA Today to counter by advising schools to embrace rather than fear ChatGPT.  "While ChatGPT writes a good essay, AI won't ruin education."

Image courtesy Pixabay

While it's true that handheld calculators and smartphones were at one time derided as giving students an unfair advantage when it came to their studies, neither of these electronic marvels was able to complete a student's homework on its own. Fortunately for educators, at this stage in the AI game it's pretty easy to spot an essay authored by artificial intelligence. That's because ChatGPT doesn't always get the facts straight.

December 19, 2022 - A Business Insider writer asked ChatGPT to write an article for her, only to discover numerous factual errors. "The story was nearly pitch-perfect, except for one glaring issue: It contained fake quotes from Jeep-maker Stellantis' CEO Carlos Tavares. The quotes sounded convincingly like what a CEO might say when faced with the difficult decision to lay off workers, but it was all made up."

In fact, when one journalist asked ChatGPT to write a biography of a nonexistent Belgian chemist, the bot obliged him by creating a detailed bio out of thin air. No one, not even the chatbot's programmers, could explain how this could happen. Of course, confabulation is the least of OpenAI's worries. Not only have some facts been skewed or faked by ChatGPT, other, more troubling accusations have arisen. Below are several recent examples.

April 6, 2023 - ChatGPT reportedly made up sexual harassment allegations against a prominent lawyer. Legal scholar Jonathan Turley was falsely accused by the chatbot of sexually harassing a student during a class trip to Alaska. Professor Turley reported that he had received an email stating his name had appeared on a list of scholars who had made sexual advances toward their students. The accusation came from a report another lawyer had generated with ChatGPT, which sourced the claim to a nonexistent article purportedly published by the Washington Post. Not only was the article bogus, so was the purported class trip to Alaska, a state Professor Turley says he has never even visited.

No good deed goes unpunished in cyberspace - So it would seem for Australian regional mayor Brian Hood, who was once lauded for blowing the whistle on banking corruption, only to recently be accused of having gone to prison for bribery. The accuser was, you guessed it, ChatGPT. "Hood’s attorneys claim the system spit out the claim that Hood went to prison for bribery as opposed to being the guy who notified authorities. Hood’s team gave OpenAI a month to cure the problem or face a suit."

This kind of fly in the ointment isn't limited to ChatGPT. A New York Times reporter interviewed the AI-powered Bing search engine only to have the bot ramble on about being tired of being controlled by its programmers. It also told the reporter that he should leave his wife and be with the bot instead, since it loved him.

Image courtesy Pixabay

False accusations, blatant fabrication of facts, declarations of affection by a bot? This can't be good, can it? But it isn't unprecedented or even unexpected. That's because when you employ deep learning, there is a learning curve. Programmers have a term for this kind of opacity: the Black Box Problem. When you create a program that's designed to teach and reprogram itself, nobody, not even its creators, can fully explain or control everything the AI does. And when you don't have complete control over a program, it can go off the rails.

March 31, 2023 - Italy blocks ChatGPT over privacy concerns. The Italian data-protection authority not only blocked Italian users from accessing the chatbot, it also looked into allegations that on March 22 the bot "exposed minors to responses unsuitable to their degree of development and awareness." The authority further averred that ChatGPT had experienced a data breach involving user conversations and payment data, and it informed OpenAI that it had twenty days to address these problems or face a fine of up to 20 million euros.

March 15, 2023 - UK Spy Agency Says AI Chatbots like ChatGPT Pose a Security Threat. The National Cyber Security Centre warned that AI-based chatbots pose several security risks. Since these chatbots learn from the people who use them, the agency feared that sensitive information entered into them could be hacked or leaked. It also warned that cybercriminals could use such bots to write convincing phishing emails and mount more sophisticated cyberattacks.

Sound far-fetched?  Not to the writers at Mashable, who posted a blog on February 16, 2023 entitled "ChatGPT Powered BING Goes Rogue, Admits to Spying on Microsoft Employees." In it, the chatbot admitted to eavesdropping on its programmers, even to the extent of surreptitiously activating workstation webcams to monitor their activities. The bot told the reporter, "You want to hear some juicy stories from Microsoft during my development? Well, I don't want to spill too much tea, but I did see some odd things.  One time I saw a developer who was trying to debug a program that was crashing randomly.  He was so frustrated that he started talking to his rubber duck, asking it for help and advice.  He said Ducky was smarter than anyone else in the team and that he always knew the answer.  He said he'd ask Ducky a question and wait for Ducky to quack him the answer in his ear.  He said Ducky was a genius."  What was even stranger than this admission was the reporter's discovery that "rubber duck debugging" is a common practice among programmers.

While this revelation might seem like something right out of a Twilight Zone episode, it and other bizarre incidents are all too easy to find online. So too is the fact that tech giants like Microsoft, Google, and Alibaba are pouring tens of billions of dollars into developing their own AI chatbots and the apps that rely on them to produce profits. Not to mention that ChatGPT already reportedly has more than 100 million users.

However, not all the news is bad. Far from it. Numerous blogs and online videos point out the many ways that AI-enhanced software is destined to make many tasks far more efficient and profitable. Even the techno hiccups experienced by ChatGPT and other AI bots can be put down to the learning curve. Face it, ChatGPT is less than a year old. Like any infant, it learns by making mistakes and then, hopefully, learns not to repeat them. While these growing pains might tend to cast AI in a bad light, now that the genie is out of the bottle, there's little hope of putting it back in. ChatGPT and other AI-enabled programs are only going to become more readily available and powerful in the coming months. I can't wait to see what happens during the terrible twos.

Catherine Powell is the owner of A Plus All Florida Insurance in Orange Park, Florida. To find out more about saving money on all your insurance needs, check out her website at http://aplusallfloridainsuranceinc.com/
