Bridging the Gap: AI, Cybersecurity, and the Future of Data Protection

October 2023 marks the 20th annual Cybersecurity Awareness Month, and, oh, what a difference twenty years has made in information technology. Google went public and launched Gmail in 2004. The first iPhone was released in 2007, followed by the first Android phone a year later. Bluetooth, Facebook, electric and even driverless cars, 3D printing—all advances that have become part and parcel of our everyday lives in the past twenty years.

At one time we could clearly delineate when we were “online” and “offline.” Today nearly every new device, machine, and appliance can be connected to the internet and/or to each other. Meanwhile, hackers have become more sophisticated in exploiting vulnerabilities, both machine and human. In the widely publicized MGM Resorts ransomware attack in September 2023, the bad actors employed a fairly unsophisticated social engineering technique called vishing. With just a clever phone call to the resorts’ IT support desk and enough publicly available data to sound convincing, they were considered a trusted source and allowed access to key network information.

How AI is Changing Cybersecurity and Information Assurance

Cue the scary Halloween music. It’s no wonder October was chosen to raise awareness for cybersecurity. The fear is real, and it’s even harder to tell the real from the fake now that AI has entered the arena. We’ve entered the Twilight Zone, where all of our vulnerabilities are being unmasked by artificial intelligence and new issues are taking shape before agencies know how to prepare.

Like the half-mask in Phantom of the Opera, one side of AI has a seemingly endless array of positive benefits, but AI can be ugly when used for nefarious purposes. While organizations are busy training staff to identify phishing emails (the ones with the typos), AI can generate emails that are more believable than those drafted by humans. Today’s AI-generated attacks can even emulate voices like your grandmother’s or your network engineer’s, more easily manipulating people into complying with requests, whether direct demands for money or routine requests for access codes.

Knowing & Safeguarding Data

Don’t blame the messenger…or, in this case, the Large Language Model. Blocking the use of AI is not the answer. AI-driven systems and solutions enable accuracy and efficiency and empower a faster, more productive workforce. IntelliDyne applies AI capabilities in an ethical manner, leveraging the power of AI to deliver custom risk management, automation, and zero-trust solutions.

Harnessing the power of AI for good means getting back to the basics of cybersecurity and information assurance. While cybersecurity focuses on protecting the doorways to and repositories of data, ensuring that no unauthorized access is allowed into company data stores, information assurance focuses on knowing the types of data being stored and used.

  1. Take inventory.
    Many government agencies are still relying on legacy systems that are not well protected and possibly not even accounted for in inventories. Having complete knowledge of your cyber terrain, including the number of devices on your network and those that are IP-enabled, is the first line of defense. Bad actors often get in through “dark” IT assets, the devices you aren’t aware you have. Once you know what’s in your environment, you can take the necessary steps to secure it.
  2. Evaluate your data.
    It’s essential to know what is being fed into the AI model. Generative AI tools are built on predictive Large Language Models, and a model only formulates and learns from what goes into it. What doesn’t go in won’t come out. Much of government data is highly sensitive, and the details need to be sanitized before being fed into AI. Eliminating what you don’t want there and removing what has already slipped through the cracks are the next steps in securing your AI channels.
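The first step above, spotting “dark” IT assets, boils down to comparing what you discover on the network against what your inventory says you own. Here is a minimal sketch of that comparison; the asset names and IP addresses are illustrative placeholders, and in practice the discovered list would come from network scans or an asset-management tool.

```python
# Sketch: flag "dark" IT assets by comparing devices discovered on the
# network against a known-asset inventory. All names/IPs are hypothetical.

KNOWN_ASSETS = {
    "10.0.0.10": "file-server-01",
    "10.0.0.11": "print-server-01",
    "10.0.0.20": "workstation-lab-a",
}

def find_dark_assets(discovered_ips, known_assets=KNOWN_ASSETS):
    """Return discovered IPs that are missing from the inventory."""
    return sorted(ip for ip in discovered_ips if ip not in known_assets)

discovered = ["10.0.0.10", "10.0.0.11", "10.0.0.20", "10.0.0.99"]
print(find_dark_assets(discovered))  # → ['10.0.0.99']
```

Anything the function returns is a device you didn’t know you had, and therefore a device you haven’t secured.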
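The second step, sanitizing details before they reach an AI model, can be as simple as pattern-based redaction. The sketch below shows the idea with a few common patterns; these regexes are illustrative only, not exhaustive, and a production pipeline would use vetted PII-detection tooling.

```python
import re

# Sketch: redact common sensitive patterns before text is fed to an AI
# model. The patterns below are illustrative assumptions, not a complete
# PII taxonomy.

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email address
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # US phone
]

def sanitize(text: str) -> str:
    """Replace sensitive substrings with placeholder tags."""
    for pattern, tag in REDACTIONS:
        text = pattern.sub(tag, text)
    return text

print(sanitize("Contact jdoe@agency.gov or 555-867-5309, SSN 123-45-6789."))
# → Contact [EMAIL] or [PHONE], SSN [SSN].
```

Running sanitization on everything bound for the model enforces the rule that what doesn’t go in can’t come out.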

Transparency Builds Trust

But how do we trust what AI is doing for us, and how can customers trust AI-generated data? This was one of the hot topics at this year’s Dreamforce Conference, hosted by Salesforce and dubbed “the largest AI event of the year.” As you roll out AI-powered solutions, be forthcoming with your customers when content is not generated by a human. Microsoft’s Bing Chat, for example, is upfront when its search responses are AI-generated. Not only does this level-set the interaction by telling users to take AI output with a grain of salt, it also builds client confidence through transparency.
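Being forthcoming about machine-generated content can be enforced in code rather than left to policy. Below is a minimal sketch of a disclosure wrapper applied to every model response before it reaches a customer; the label text and function name are assumptions for illustration.

```python
# Sketch: attach a disclosure to every AI-generated response so customers
# always know a human did not write it. Label wording is illustrative.

AI_DISCLOSURE = "Note: This response was generated by AI. Please verify important details."

def label_ai_response(text: str) -> str:
    """Prepend a clear AI-generated-content disclosure to model output."""
    return f"{AI_DISCLOSURE}\n\n{text}"

print(label_ai_response("Your request has been scheduled for Tuesday."))
```

Centralizing the disclosure in one wrapper keeps the labeling consistent across every channel the model serves.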

AI is revolutionizing how we protect, manage, and utilize data. While AI improves accuracy and efficiency and reduces manual labor, ensuring customer trust in the final output depends on both establishing and constantly updating processes that incorporate the latest cybersecurity and information assurance best practices. With a comprehensive understanding of your IT environment and diligent management of your data, you can make the most of AI without any frightening surprises, knowing you’re doing the utmost to protect your agency from malicious threats.
