Just ask Google and Microsoft: businesses are struggling to keep up with developments in artificial intelligence. In a remarkably short time, Google has begun retooling Search to serve visitors AI-infused multimedia content, and Microsoft is rapidly integrating AI capabilities into Office 365 and Bing search.
In only three months, astrophysicist Neil deGrasse Tyson went from telling TMZ, “if AI eventually takes over all jobs, we can all go to the beach. After all, going to work is not embedded in our DNA, so society could reimagine how humans live”…
…to telling Fox News recently, “Part of me wonders, maybe AI will create such good fakes that no one will trust the Internet anymore for anything, and we just have to simply shut it down. Maybe it's the final nail in the coffin in the internet.”
At Aware Force, we provide companies and organizations with relatable cybersecurity content, including videos, quizzes, and actionable one-sheets that engage their employees. Our initial thought was that AI wasn’t a significant threat (yet) to our business model because we source all our content ourselves. ChatGPT and other AI-based platforms scrape the web and deliver generic answers, so their content often contains errors or unsupported opinions.
But at the recent Google I/O, the company announced that its upcoming chatbot technology will cite its sources. Over time, then, AI will become an ever more reputable resource.
That means our team must keep improving, delivering human-generated content that makes readers say, “Wow!” AI has already been a game changer for us in improving image quality, especially employee headshots. We’ve also found that AI-generated voices let us replace closed captioning with voice narration in the non-English versions of our cybersecurity videos.
Artificial intelligence is also very good at tightening our sentence structure and catching spelling and punctuation errors. We also use it to polish the content our customers provide for insertion in their Aware Force newsletters.
Where is AI coming up short, at least right now? Generating cybersecurity content, for one thing. The technology spits out text that matches the parameters it’s given, but that text must be edited, sourced, and supplemented with newer facts. AI image creation looks great in many examples, but try it yourself: unless you’re trained in instructing the technology, an arduous process, AI-generated images stink.
Artificial humans are good enough for presentations, but your brain can still spot the shortcomings. Yes, the technology will improve rapidly (Synthesia, one company in the space, is now valued at close to $1 billion), but right now, we’re not there.
AI's biggest danger to cybersecurity professionals is that it levels the playing field for phishing. Employees who already fall for implausible emails and texts will be even more apt to respond to convincing AI-generated ones.
That will force those of us on the right side to up our game. Artificial intelligence won’t kill our jobs; it will create more, refining our roles and pushing us to innovate and adapt.
As we said before: AI is growing in ways we didn't expect. Ignoring the role it plays in the shifting cybersecurity landscape is a dangerous game to play.
According to a CNBC article, generative AI is making ransomware attacks and phishing schemes easier to deploy. Companies are employing AI to shore up their cybersecurity defenses, but hackers are also using AI to find vulnerabilities and launch attacks.
Aware Force is aggressively adding AI-focused content to our cybersecurity news service, with employee readers telling their IT departments they appreciate the insight.
Check our Cybersecurity Newsletter page to learn more, or connect with us, and you'll see why organizations across the US and Canada rely on outstanding cybersecurity content, branded for them, delivered by Aware Force.