Should We Start Certifying Cybersecurity for AI Solutions?
February 4, 2022

AI technology, such as machine-learning and deep-learning techniques, is being advanced to counter sophisticated and destructive cyber-attacks. But because AI cybersecurity is an emerging field, experts worry about the new threats that could emerge if vulnerabilities in the AI technology itself were exploited. Without a certifying body regulating AI used for cybersecurity, will organizations find themselves at greater risk and open to manipulation?

Developing Global Regulations for AI Systems 

On 21 April 2021, the European Commission (EC) published a proposal describing the “first-ever legal framework on AI”. Margrethe Vestager, the Commission’s Executive Vice-President for a Europe Fit for the Digital Age, describes the landmark rules as a way for the EU to spearhead “the development of new global norms to make sure AI can be trusted.” Commissioner for Internal Market Thierry Breton adds that the new AI regulation “offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism or cyber security.”

However, the potential for new risks to emerge should not be ignored. The Commission proposes requirements for strengthening AI systems, particularly those that could be used to bypass or manipulate human behavior. AI systems considered to pose the highest risk if compromised include those used in transportation infrastructure, education platforms, robot-assisted procedures, credit scoring, evidence evaluation, and document authentication.

Strengthening AI Systems to Maintain Accountability

According to Stefanie Lindstaedt, CEO of the Know-Center, a leading European research center for AI, “The potential of AI in Europe will only be exploited if the trustworthiness of data handling as well as fair, reliable and secure algorithms can be demonstrated.” 

Because AI security needs to be strengthened to mitigate risks and maintain accountability, experts are offering their views and recommendations. The Centre for European Policy Studies (CEPS) Task Force on AI and Cybersecurity proposed the following:

  • Maintain and secure logs documenting the development and coding of AI systems
  • Track model parameters whenever machine learning is used (a logging sketch follows this list)
  • Establish cyber-secure pedigrees for the software libraries linked into code
  • Establish cyber-secure pedigrees for the data libraries used to train machine-learning algorithms
  • Provide proof demonstrating due diligence when testing AI technology
  • Leverage techniques such as randomization, ensemble learning, and noise injection to enhance AI reliability and reproducibility (a second sketch follows this list)
  • Make information available for auditing models and for subsequent analysis, particularly at points of failure
  • Allow system audits by devising methods that can also be carried out by trusted third parties.
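
To make the logging and pedigree recommendations concrete, here is a minimal Python sketch that hashes the training data and the model parameters and appends them to an append-only audit log. The file names, log format, and the record_training_run helper are illustrative assumptions for this article, not part of the CEPS proposal.

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    AUDIT_LOG = Path("model_audit_log.jsonl")  # hypothetical append-only audit log

    def sha256_of_file(path: Path) -> str:
        """Return the SHA-256 digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def record_training_run(model_params: dict, data_path: Path, code_version: str) -> None:
        """Append a tamper-evident record of a training run to the audit log.

        Hashing the training data and the serialized parameters gives a
        pedigree that an auditor, or a trusted third party, can later verify.
        """
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "code_version": code_version,              # e.g. a git commit hash
            "data_sha256": sha256_of_file(data_path),  # pedigree of the training data
            "params_sha256": hashlib.sha256(
                json.dumps(model_params, sort_keys=True).encode()
            ).hexdigest(),                             # pedigree of the parameters
            "params": model_params,                    # tracked for later audit
        }
        with AUDIT_LOG.open("a") as f:
            f.write(json.dumps(entry) + "\n")

    # Demo: create a toy data file and log one (hypothetical) training run
    demo_data = Path("training_data.csv")
    demo_data.write_text("feature,label\n0.1,0\n0.9,1\n")
    record_training_run(
        model_params={"learning_rate": 0.01, "n_estimators": 50, "max_depth": 8},
        data_path=demo_data,
        code_version="3f9c2ab",
    )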
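
The reliability recommendation can be sketched in the same spirit. The second example below, which assumes scikit-learn and a synthetic dataset standing in for real security telemetry, combines noise injection (small Gaussian perturbations added to the training inputs) with a randomized bagging ensemble and a fixed seed for reproducibility.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for security telemetry (for illustration only)
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Noise injection: augment the training inputs with small Gaussian
    # perturbations so the model does not overfit to exact feature values.
    rng = np.random.default_rng(0)
    X_noisy = X_train + rng.normal(scale=0.1, size=X_train.shape)
    X_aug = np.vstack([X_train, X_noisy])
    y_aug = np.concatenate([y_train, y_train])

    # Ensemble learning with randomization: bagging trains many decision
    # trees on random resamples of the data, averaging out the quirks of
    # any single model; the fixed seed supports reproducibility.
    ensemble = BaggingClassifier(n_estimators=50, random_state=0)
    ensemble.fit(X_aug, y_aug)
    print(f"Held-out accuracy: {ensemble.score(X_test, y_test):.3f}")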

To help promote AI as a powerful solution for countering cyberattacks, a few organizations have already invested in developing methodologies and tools that bring trust and value to customers and that enable cybersecurity assessments demonstrating AI systems are secure and ethical to deploy.

Finally, compliance with standards and regulations is key to enabling trust in AI. If you wish to learn more about AI cybersecurity threats and their consequences, or to conduct conformity assessments, reach out to a specialized and recognized lab in the field.