From strategic decision-making to customer experience, artificial intelligence is gradually spreading within companies. What about its use for protection against cyber threats? Find out in the second part of this study conducted by Capterra. Answering frequently asked customer questions with a chatbot, automating the creation and distribution of marketing video content, or analyzing a large volume of applications for a job offer.
These examples illustrate the various ways companies use artificial intelligence (AI). Its potential has not escaped the attention of companies, nor has it gone unnoticed by cybercriminals. By relying on this technology, hackers carry out attacks that are increasingly difficult for employees and traditional detection tools to spot. Fear of AI-fueled threats is also one of the main concerns companies raised for 2024, according to the results presented in the first part of our study (34% of those questioned).
Although artificial intelligence poses the risk of amplifying cyber threats, it also plays a significant role in protecting information and systems. It is emerging as an essential tool in the fight against current and future cyber threats by making it possible to detect, analyze, and respond more quickly to malicious attacks. To what extent do companies adopt these types of solutions? What advantages and challenges does the alliance of artificial intelligence and cybersecurity present?
What risks can be anticipated to facilitate the implementation of these solutions? These are the themes covered in this second article. Our analysis here focuses on the responses of 1,393 employees, out of a total of 2,032 interviewed, who declared themselves responsible for, involved in, or fully informed of the security measures within their organization. The complete methodology is available at the end of this article.
Network, Email Security, And Threat Detection: Areas Of Cybersecurity Prioritized By Companies Investing In AI
An estimated two billion euros in losses in 2022 linked to successful cyberattacks (phishing infiltration, ransomware): these are the financial damages suffered by French companies, according to a study by the firm Asterès. Cybercriminals increasingly use AI to spot vulnerabilities in computer systems, bypass user access controls, and launch attacks such as more precise and convincing phishing emails. This next generation of cybercrime poses a significant risk to business activities.
The acquisition of solutions combining AI and cybersecurity is one of the options favored by the companies in our panel to respond to these cyber threats. Phishing and social engineering attacks (34%), the vulnerability of IoT networks (30%), and internal threats arising from malicious or unintentional acts by employees (29%) are the main reasons that prompted them to opt for a specific means of defense: AI-powered cybersecurity. For the 1,049 companies that have chosen to turn to cybersecurity solutions driven by artificial intelligence, investment in certain areas of cybersecurity has been prioritized, including:
- computer network security (46%),
- email security (38%),
- threat detection and analysis (36%).
Artificial intelligence is not new to the cybersecurity industry. For example, AI-powered technologies such as machine learning have been used since 2015 to identify risks that could compromise IT systems. As AI develops and new-generation threats emerge, these tools tend to become more sophisticated and their use more widespread. But how do AI-powered security systems improve on traditional cybersecurity tools? We explore this in the following section, focusing on the responses provided by employees whose companies have invested in this type of solution.
Cybersecurity Driven By AI Compared To Traditional Cybersecurity: What Added Value?
By relying in particular on machine learning algorithms, tools based on artificial intelligence can analyze significant amounts of data in real time and thus identify patterns representing threats to systems more quickly. Companies can then detect potential security problems as soon as they occur, a significant advantage in the fight against cybercriminals. This point is highlighted by 45% of the companies using these solutions.
Another advantage of these tools over a traditional approach, noted by these same respondents, is behavioral analysis (40% of responses). AI can identify unusual actions because it learns from data what the normal behavior of an organization's systems and users looks like. Whether it is an unrecognized device connecting or an unusual surge of traffic on a site's page, AI-driven systems can recognize these incidents and quickly trigger investigations and alerts.
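To make this principle concrete, here is a minimal, hypothetical sketch (not tied to any product mentioned in the study): it learns a statistical baseline of "normal" hourly traffic and flags values that deviate sharply from it. Real AI-driven tools rely on far richer machine learning models, but the underlying idea of learning normal behavior from data is the same; the data and threshold below are invented for illustration.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn what 'normal' looks like from historical observations."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) / sigma > threshold

# Hourly request counts observed during normal operation (illustrative data)
normal_traffic = [120, 130, 125, 118, 122, 127, 124, 121, 126, 123]
baseline = build_baseline(normal_traffic)

print(is_anomalous(124, baseline))  # a typical hour
print(is_anomalous(900, baseline))  # a sudden surge of traffic
```

In practice, the "baseline" would be a trained model covering many signals (logins, devices, network flows) rather than a single metric, which is precisely why these systems need large volumes of historical data.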
Finally, for 32% of the respondents concerned, the automation enabled by these solutions is also one of their main assets. In a traditional cybersecurity approach, IT managers are often responsible for manually analyzing the data collected by their tools: processing and sorting alerts, analyzing security logs, or updating databases in light of a new threat are just a few examples of the tasks assigned to them.
By automating most of these tasks, AI can free the teams concerned from time-consuming work. As participants' responses illustrate, AI has real potential to improve business security. However, this technology also presents limitations and challenges.
Quality And Volume Of Data Processed: A Limitation Of AI In Cybersecurity Raised By 35% Of The Companies Concerned
One of the main limitations of AI in cybersecurity mentioned by respondents is a lack of precision in its analyses, combined with the large volume of information generated (35%). While AI algorithms can process large amounts of data and identify patterns, their analytical capacity relies on the information on which they were previously trained.
Recent examples of using ChatGPT for text generation show that AI can provide plausible yet incorrect answers when faced with unknown data. Relying on raw AI-generated output without human verification can therefore lead to imprecise or distorted conclusions. The large volume of data processed, which can complicate the analysis of the results obtained, compounds this issue.
Among the other issues raised by this part of the panel is the problem of adversarial attacks (30%). This fragility of AI-based tools stems from the possibility that hackers feed an AI-driven machine learning model with inaccurate or even malicious training data designed to deceive the model and cause it to make mistakes. In addition to these risks, "false positives" (27%) and "false negatives" (26%) also appear among the main responses highlighted.
Unlike humans, AI lacks contextual awareness: it can correctly interpret certain events only if the model has been exposed to similar situations beforehand. The AI may then flag benign company activity as suspicious (a false positive) or ignore a real threat (a false negative), requiring additional attention from IT managers to sort this information. Finally, all of these factors can be linked to another disadvantage: the system's lack of autonomy, which requires human expertise to supervise it (26%).
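The false positive/false negative distinction can be illustrated with a short, hypothetical sketch: it compares an AI detector's verdicts against analyst ground truth for a batch of alerts, counting exactly the two error types described above. The alert data is invented for illustration.

```python
def triage_counts(alerts):
    """Compare the detector's verdicts against analyst ground truth.

    `alerts` is a list of (flagged_by_ai, actually_malicious) pairs.
    Returns the number of false positives and false negatives.
    """
    false_positives = sum(1 for flagged, real in alerts if flagged and not real)
    false_negatives = sum(1 for flagged, real in alerts if not flagged and real)
    return false_positives, false_negatives

# A hypothetical batch of triaged alerts
week = [
    (True, True),    # real threat, correctly flagged
    (True, False),   # benign activity flagged as suspicious (false positive)
    (False, True),   # real threat the model missed (false negative)
    (False, False),  # benign activity, correctly ignored
    (True, False),   # another false positive
]
fp, fn = triage_counts(week)
print(fp, fn)
```

Keeping these two counts low simultaneously is the hard part: a stricter detector reduces false negatives but raises false positives, which is exactly where human review comes in.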
The complexity of artificial intelligence models and the need to interpret complex data often require talent with in-depth knowledge of both cybersecurity and the intricacies of these technologies. In a context marked by difficulties in retaining and attracting IT talent, this aspect can pose a problem for certain companies. While artificial intelligence can help businesses across many industries make better strategic cybersecurity decisions, it is becoming increasingly clear that companies must recognize an essential factor to exploit its potential fully: human intervention.
The Importance Of Balancing The Human Factor And The Use Of AI Within A Cybersecurity Strategy
Although AI has developed to the point where it is entrusted with important tasks in enterprise data protection, this technology still cannot reproduce the human mind's capabilities of observation, contextualization, and decision-making. This is why the help of cybersecurity experts can be essential to the effectiveness of these solutions against cybercrime.
In what specific areas can the intervention of experts help a company benefit from the advantages of AI for the security of its operations? First, the complexity of artificial intelligence and of its use requires expertise that employees unfamiliar with this technology may lack. For 43% of companies, cybersecurity experts have a role to play here in raising awareness among and training more junior employees, an important factor at a time when cyber threats continue to evolve.
An expert can also develop an understanding of a company's operations, its regulatory framework, and the unique exceptions and risks it may face. 41% of companies see this human contextual understanding as essential, since it can help AI improve its decision-making and detection processes. While AI has many advantages in data analysis and task automation, it is not free from the risks of approximation, error, and biased judgment.
According to the organizations in our panel, expert supervision can help ensure systems are running smoothly (40%) and that cybersecurity practices meet ethical (34%) and regulatory requirements. Collaboration between skilled cybersecurity employees and AI can help ensure a balanced strategic approach. Coupled with the logic, contextual understanding, and sometimes necessary intuition that humans provide, the potential of AI can help an organization strengthen its defense against an ever-widening range of threats to its operations.
Control And Knowledge Of Systems Are Crucial Factors For Adopting AI-Powered Cybersecurity Tools
Because it can automate many processes, identify threats in real time, and improve incident response times, AI has the potential to enhance traditional cybersecurity practices significantly. However, a lack of understanding of AI tools and their limitations can lead to failure to adopt artificial intelligence models and even present increased security risks for organizations.
To get the most out of these solutions in its cybersecurity strategy, an organization needs to analyze the risks associated with the current limitations of these systems, ensure that they integrate with existing security protocols, and benefit from the support of dedicated experts.