
Should We Start Certifying Cybersecurity for AI Solutions?

· Compliance and Regulations, General Insights and Trends

AI technology, such as machine-learning and deep-learning techniques, is being advanced to counter sophisticated and destructive cyber-attacks. But because AI cybersecurity is an emerging field, experts worry about the new threats that could emerge if vulnerabilities in AI technology are exposed. Without a certifying body regulating AI technology used for cybersecurity, will organizations find themselves more at risk and vulnerable to manipulation?

Developing Global Regulations for AI Systems 

On 21 April 2021, the European Commission (EC) published a proposal describing the “first-ever legal framework on AI”. Margrethe Vestager, Executive Vice President of the European Commission for A Europe Fit for the Digital Age, describes the landmark rules as a way for the EU to spearhead “the development of new global norms to make sure AI can be trusted.” Commissioner for Internal Market Thierry Breton adds that the new AI regulation “offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism or cyber security.”

However, the potential for new risks to emerge should not be ignored. The Commission proposes requirements for strengthening AI systems, particularly those that could be used to bypass or manipulate human behavior. AI systems considered to pose the highest risk if manipulated include those used in transport infrastructure, education platforms, robot-assisted procedures, credit scoring, evidence evaluation, and document authentication.

Strengthening AI Systems to Maintain Accountability

According to Stefanie Lindstaedt, CEO of the Know-Center, a leading European research center for AI, “The potential of AI in Europe will only be exploited if the trustworthiness of data handling as well as fair, reliable and secure algorithms can be demonstrated.” 

Because AI security needs to be strengthened to mitigate risks and maintain accountability, experts are weighing in with recommendations. The Centre for European Policy Studies (CEPS) Task Force on AI and Cybersecurity proposed the following; a brief code sketch illustrating the logging and pedigree recommendations follows the list:

  • Maintain and secure logs documenting the development and coding of AI systems
  • Track model parameters whenever machine learning is used
  • Establish cyber-secure pedigrees for software libraries linked to the code
  • Establish cyber-secure pedigrees for data libraries used to train machine-learning algorithms
  • Provide proof demonstrating due diligence when testing AI technology
  • Leverage techniques such as randomization, ensemble learning, and noise prevention to enhance AI reliability and reproducibility
  • Make information available for auditing models and for subsequent analysis, particularly at points of failure
  • Allow system audits by devising methods that trusted third parties can also carry out
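
To make the logging and pedigree recommendations concrete, the sketch below (standard-library Python; the file names, fields, and parameter values are hypothetical and not drawn from the CEPS report) shows one simple way to record model parameters together with SHA-256 digests of the data and software artifacts used in a training run, so auditors can later verify exactly what was trained and from which inputs.

```python
# Minimal, illustrative sketch (assumed setup, not from the CEPS report):
# append-only audit logging of model parameters plus SHA-256 "pedigree"
# digests of the artifacts used in a training run.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # hypothetical audit-log location


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file (data set, library archive, model file)."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_training_run(model_params: dict, artifacts: list) -> dict:
    """Append one training-run entry: timestamp, parameters, artifact digests."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_params": model_params,
        "artifact_digests": {str(p): sha256_of(p) for p in artifacts},
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry


if __name__ == "__main__":
    # Placeholder training data so the example runs end to end.
    data_file = Path("training_data.csv")
    data_file.write_text("feature,label\n0.1,0\n0.9,1\n", encoding="utf-8")

    # Hypothetical hyperparameters for an ML model used in a security product.
    params = {"model": "RandomForestClassifier", "n_estimators": 100, "max_depth": 8}
    print(json.dumps(record_training_run(params, [data_file]), indent=2))
```

An append-only JSON Lines log like this is easy to hand to a trusted third party for audit, and the recorded digests let reviewers detect after the fact whether training data or linked libraries were altered.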

To help promote AI as a powerful solution for countering cyberattacks, a few organizations have already invested in developing methodologies and tools that bring trust and value to customers and enable cybersecurity assessments demonstrating that AI solutions are secure and ethical to deploy.

Finally, compliance with standards and regulations is key to enabling trust in AI. If you wish to learn more about AI cybersecurity threats and their consequences, or to conduct conformity assessments, reach out to a specialized and recognized laboratory in the field.

 
