There is a debate going on in the technology world: is Artificial Intelligence (AI) our friend or foe? AI was introduced with the promise of revolutionizing the way we work.
Every day, new AI-driven tools emerge to improve productivity by taking over repetitive tasks.
However, the story is taking another turn. People are now using AI in menacing ways.
Guy Carpenter published a report stating that generative AI (GenAI) will enable the development of polymorphic malware: software that continually alters itself to evade detection and undermine cybersecurity defenses.
The report added that the more such malware is refined with AI, the more dangerous it will become.
In the report's words, "AI enhancements to attack vectors will increase the efficacy and efficiency of attacks in the pre-intrusion phases of the cyber kill chain," allowing threat actors to target a larger number of victims more cost-effectively.
On the same theme, the UK's National Cyber Security Centre (NCSC) has warned against the use of AI in cyberattacks, cautioning that AI will help hackers increase the volume of attacks and weaken targeted systems.
The agency further commented that "AI can enhance reconnaissance and social engineering tactics, making them more effective and harder to detect."
Its report adds that AI can analyze exfiltrated data far more quickly, which can lead to more impactful cyberattacks.
AI-driven cyberattacks could have such a long-lasting impact that former Google CEO Eric Schmidt has felt compelled to weigh in. He has raised concerns about AI becoming more powerful by the day and acting autonomously, which effectively means the technology is drifting away from human oversight.
He further stated that "Such developments could pose significant risks, emphasizing the need for systems to monitor and regulate AI technologies."
He went on to suggest that "Only AI can effectively police other AI systems, underscoring the complexity of managing advanced AI technologies."
Another AI expert, Rishabh Misra, has voiced similar concerns, highlighting the dangers of misconfiguration, malicious use, and the potential for AI to surpass human intelligence.
He added that AI-powered bots could be used to spread misinformation and run fake campaigns on social media. The list doesn't stop there: hackers could leverage the technology to take control of vehicles or operate dangerous weapons.
Other industry experts are also speaking out about the misuse of AI.
Michael Bruemmer, VP of Global Data Breach Resolution at Experian, said, "While supply chain breaches and ransomware dominated the cyber landscape in 2024, AI-related incidents will likely become a major headline maker in 2025."
He continued, "Investments in cybersecurity will increase to tackle this emerging threat while hackers are having a field day leveraging it for everything from phishing attacks and password cracking to producing malware and deepfakes."
Experian also published a report in which it stated that there will be a rise in AI-powered fraud and cyberattacks on data centers by 2025.
Furthermore, the report warns that AI will be able to replicate government-issued identification, fueling further online identity theft, which makes protecting government IDs an urgent priority.
AI is also giving rise to deepfakes in cybersecurity. Deepfake technology can replicate a person's voice, gestures, and appearance, then use that composite to impersonate the person and deceive security controls.
A coin has two sides, and the same holds for AI. The technology is being put to harmful use, but businesses can turn the very same capabilities toward defending against cyberattacks.
Victor Benjamin, an assistant professor of information systems at Arizona State University, stresses the need for a proactive stance in cybersecurity.
He suggests that "AI can be utilized to continuously monitor networks and identify potential threats in real-time, allowing for quicker responses to emerging threats." He also highlights the importance of cybersecurity education in helping individuals recognize and respond to potential attacks.
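In practice, the kind of real-time monitoring Benjamin describes often rests on unsupervised anomaly detection over network telemetry. The sketch below is a minimal illustration of that idea rather than any specific product's implementation: it trains a scikit-learn IsolationForest on synthetic "normal" connection features (the feature set, values, and thresholds are assumptions made for the example) and flags new connections that deviate from the baseline.

```python
# Minimal sketch of AI-assisted network monitoring via anomaly detection.
# Assumes per-connection features (bytes sent, bytes received, duration,
# failed logins) are already being collected; all data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic used as the training baseline.
baseline = np.column_stack([
    rng.normal(50_000, 10_000, 5_000),   # bytes sent
    rng.normal(200_000, 40_000, 5_000),  # bytes received
    rng.normal(30, 8, 5_000),            # connection duration (seconds)
    rng.poisson(0.1, 5_000),             # failed login attempts
])

# Train an unsupervised anomaly detector on the baseline traffic.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)

# New connections to score; the last row mimics a suspicious burst
# (large outbound transfer plus many failed logins).
new_traffic = np.array([
    [52_000, 195_000, 28, 0],
    [48_000, 210_000, 33, 0],
    [900_000, 5_000, 600, 40],
])

# predict() returns -1 for anomalies and 1 for normal points.
for row, label in zip(new_traffic, detector.predict(new_traffic)):
    status = "ALERT" if label == -1 else "ok"
    print(f"{status}: bytes_out={row[0]:.0f}, failed_logins={row[3]:.0f}")
```

In a real deployment, the baseline would come from historical flow logs or SIEM telemetry rather than synthetic data, and flagged connections would feed an analyst queue or automated response playbook instead of being printed.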