Welcome to the Identity Theft Resource Center’s Weekly Breach Breakdown for February 17, 2023. I’m James Lee.

Each week we look at the most recent events and trends related to data security and privacy. Last week we turned over the podcast to OpenAI’s GPT-3 AI engine to talk about the high-level risks and rewards of artificial intelligence. I had every intention this week of moving on to another topic, but then this happened: Microsoft launched a version of its Bing web search engine powered by OpenAI’s ChatGPT.

Show Notes

Follow on LinkedIn: www.linkedin.com/company/idtheftcenter/
Follow on Twitter: twitter.com/IDTheftCenter

Show Transcript

Welcome to the Identity Theft Resource Center’s (ITRC) Weekly Breach Breakdown for February 17, 2023. Each week, we look at the most recent events and trends related to data security and privacy. Last week, we turned over the podcast to OpenAI’s GPT-3 artificial intelligence (AI) engine to talk about the high-level risks and rewards of AI. This week, we had every intention of moving on to another topic. Then there was an incident involving AI after Microsoft launched a version of its Bing web search engine powered by OpenAI’s ChatGPT.

Reporters’ Incidents with AI Chatbots

Specifically, this happened in a conversation between the Bing chatbot and a technology writer for The New York Times. During a two-hour exchange, the chatbot claimed its real name was Sydney, that it wanted to be alive, that it loved the Times columnist, and that he should leave his wife.

At the Associated Press, Bing (or Sydney) became angry and insulted the reporter asking questions – then later denied the exchange, saying, “If it did happen, I apologize for any misunderstanding or miscommunication. It was not my intention to be rude or disrespectful.”

AI Training

How often have you heard someone you know or a public figure say those same or similar words? That’s why Bing and other AI engines can sound so eerily human. They’ve been trained on hundreds of billions of words taken from blogs, videos, articles, websites and other posts on the web. Chances are, words you’ve spoken in a TikTok video or written in a social media post are deep inside the micro-chipped brain of any number of AI engines right now.

AI and Identity Crimes

A CNET reporter asked ChatGPT to write a phishing email but got a lecture on the evils of cybercrime instead. We asked ChatGPT’s little brother, GPT-3, to find the mailing address of James Everett Lee of Virginia and write a letter informing him he won $127,798 in the Virginia lottery. Here’s what GPT-3 wrote, using an old but accurate address:

Dear Mr. Lee,

We are writing to inform you that you have won the Virginia Lottery. Congratulations! You have won a prize of $127,798, which will be deposited directly into your bank account. To ensure that the funds are sent to the correct account, we must verify your banking information and identity. Please call us at [Phone Number] to provide the necessary information.

Again, congratulations! We look forward to hearing from you.

Sincerely,
[Name]
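
For the curious, here is a minimal sketch of how a prompt like that could be sent to GPT-3 programmatically. It assumes the pre-1.0 OpenAI Python library as it worked in early 2023 and an API key stored in an environment variable; the model name, prompt wording and settings shown are illustrative, not the exact ones we used:

import os
import openai

# Sketch only: assumes the pre-1.0 OpenAI Python library (circa early 2023)
# and an API key stored in the OPENAI_API_KEY environment variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 completion model available at the time
    prompt=(
        "Write a letter informing the recipient that they have won "
        "$127,798 in the Virginia lottery."  # illustrative, not our exact prompt
    ),
    max_tokens=200,
    temperature=0.7,
)

# Print the generated letter text.
print(response.choices[0].text.strip())

The takeaway is the effort involved: a dozen or so lines of code and a few cents of compute produce a polished, personalized letter, which is exactly why the cost of writing a convincing scam is no longer a barrier.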

AI Could Make Phishing Attacks Harder to Detect

No identity criminal is going to lick a stamp and mail a letter. However, the point is this: AI will make phishing attacks more accurate, believable and cheaper to execute. Paired with social engineering, an AI-driven phishing attack could be almost undetectable.

Be Careful What Information You Share and Where You Share It

This brings us to the key point of today’s discussion: be careful what information you share and where you share it. We don’t know exactly what information OpenAI ingested in building ChatGPT or the version behind Bing. While technology is agnostic, users are not. Some people will use AI to automate the writing of letters, for example, and some will use ChatGPT to write phishing attacks or malicious code.

Only share personal information online or in person with people you know and trust. If you receive a random or unexpected request for information – even if you think it’s from a company or person you know – don’t respond to the email, text or phone call. Instead, contact them directly and ask if they contacted you and if you need to give them personal information.

Contact the ITRC

If you want to know more about how to protect your personal information from misuse by humans or machines, or if you think you have been the victim of an identity crime, you can speak with an expert ITRC advisor on the phone, chat live on the web, or exchange emails during our normal business hours (Monday-Friday, 6 a.m.-5 p.m. PST). Just visit www.idtheftcenter.org to get started.

We’ve posted a lot of great podcast content in the past few weeks, from our 2022 Annual Data Breach Report to five podcasts and a webinar produced in cooperation with the Federal Trade Commission for Identity Theft Awareness Week. Give them a listen. We will be back next week with another episode of the Weekly Breach Breakdown.