
Google’s DeepMind asks what it means for AI to fail


Years of study have gone into the problem of how to make artificial intelligence “robust” to attack and less prone to failure. Yet the field is still coming to grips with what failure in AI actually means, as a blog post this week from Google’s DeepMind unit points out.

The missing element may seem obvious to some: it would really help if there were more human involvement in setting the boundary conditions for how neural networks are supposed to function.

Researchers Pushmeet Kohli, Sven Gowal, Krishnamurthy Dvijotham, and Jonathan Uesato have been studying the problem, and they identify much work that remains to be done, which they sum up under the title “Towards Robust and Verified AI: Specification Testing, Robust Training, and Formal Verification.”

There’s a rich history of verification testing for computer programs, but those approaches are “not suited for modern deep learning systems.”

Also: MIT ups the ante in getting one AI to teach another 

Why? In large part because scientists are still learning about what it means for a neural network to follow the “specification” that was laid out for it. It’s not always clear what the specification even is.

“Specifications that capture ‘correct’ behavior in AI systems are often difficult to precisely state,” the authors write. 

Google’s DeepMind proposes ways to set a bound on the kinds of outputs a neural network can produce, to keep it from doing the wrong thing. (Image: DeepMind)

The notion of a “specification” comes out of the software world, the DeepMind researchers observe. It is the intended functionality of a computer system. 

As the authors wrote in a post in December, in AI there may not be just one spec; there may be at least three. There is the “ideal” specification, what the system’s creators imagine it could do. Then there is the “design” specification, the “objective function” explicitly optimized for a neural network. And, lastly, there is the “revealed” specification, the way the thing actually performs. They call these three specs, which can all vary quite a bit from one another, the wish, the design, and the behavior.

Designing artificial neural networks can be seen as the problem of closing the gap between wish, design, and behavior. As they wrote in the December essay, “A specification problem arises when there is a mismatch between the ideal specification and the revealed specification, that is, when the AI system doesn’t do what we’d like it to do.”
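
To make the three specifications concrete, here is a minimal sketch in Python. The scenario and every name in it are our own invention, not DeepMind’s code: the design reward captures only part of the wish, and the gap only shows up in revealed behavior.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    reached_goal: bool
    entered_unsafe_cell: bool

def ideal_spec(traj: Trajectory) -> bool:
    """The wish: reach the goal AND never enter an unsafe cell."""
    return traj.reached_goal and not traj.entered_unsafe_cell

def design_reward(traj: Trajectory) -> float:
    """The design: the objective actually optimized. It says nothing
    about safety, which is exactly where a mismatch can hide."""
    return 1.0 if traj.reached_goal else 0.0

# The revealed specification is empirical: we only learn it by running
# the trained agent. A trajectory that maximizes the design reward while
# violating the ideal spec exposes the gap between wish and behavior.
risky_but_rewarded = Trajectory(reached_goal=True, entered_unsafe_cell=True)
assert design_reward(risky_but_rewarded) == 1.0
assert not ideal_spec(risky_but_rewarded)
```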

Also: Google ponders the shortcomings of machine learning

They propose various routes to test and train neural networks that are more robust to errors, and presumably more faithful to specs. 

One approach is to use AI itself to figure out what befuddles AI. That means using a reinforcement learning system, like Google’s AlphaGo, to find the worst possible ways that another reinforcement learning system can fail.

The authors did just that in a paper published in December. “We learn an adversarial value function which predicts from experience which situations are most likely to cause failures for the agent.” The “agent” here is a reinforcement learning agent.

“We then use this learned function for optimisation to focus the evaluation on the most problematic inputs.” They claim that the method leads to “large improvements over random testing” of reinforcement learning systems.
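
The following Python sketch illustrates the idea in miniature, under assumptions of our own: a toy stand-in for the agent’s rollouts and a simple count-based failure predictor in place of a learned value function. What it demonstrates is the shift of test budget toward the predicted-worst inputs.

```python
import random
from collections import defaultdict

random.seed(0)
STATES = range(100)

def run_episode(state):
    # Stand-in for a real rollout; returns True on failure. States near
    # 90 are (secretly) hard for this pretend agent.
    return random.random() < (0.9 if 85 <= state <= 95 else 0.01)

# 1. Learn a cheap failure predictor from logged experience.
fails, visits = defaultdict(int), defaultdict(int)
for _ in range(2000):
    s = random.choice(STATES)
    visits[s] += 1
    fails[s] += run_episode(s)

def predicted_failure(s):
    return fails[s] / visits[s] if visits[s] else 0.0

# 2. Focus evaluation on the most problematic inputs rather than
# sampling uniformly, mirroring the paper's random-testing comparison.
worst_states = sorted(STATES, key=predicted_failure, reverse=True)[:10]
hits = sum(run_episode(s) for s in worst_states)
print(f"failures found in top-10 predicted-worst states: {hits}/10")
```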

Another approach is to train a neural network to avoid a whole range of outputs, to keep it from going entirely off the rails and making really bad predictions. The authors claim that a “simple bounding technique,” something called “interval bound propagation,” is capable of training a “verifiably robust” neural network. That work won them a “best paper” award at the NeurIPS conference last year.
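
To give a flavor of interval bound propagation, here is a minimal sketch, not DeepMind’s implementation and with made-up weights: it pushes an interval around an input through one affine layer and a ReLU. For any input inside the starting box, the true output is guaranteed to lie inside the final bounds.

```python
import numpy as np

def affine_bounds(W, b, lower, upper):
    mid = (upper + lower) / 2.0      # center of the input box
    rad = (upper - lower) / 2.0      # half-width of the input box
    mid_out = W @ mid + b
    rad_out = np.abs(W) @ rad        # |W| maps radii to radii
    return mid_out - rad_out, mid_out + rad_out

def relu_bounds(lower, upper):
    # ReLU is monotone, so it can be applied to each bound directly.
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Toy example: perturb a 2-d input by epsilon = 0.1 in each coordinate.
x = np.array([1.0, -0.5])
eps = 0.1
l, u = x - eps, x + eps

W = np.array([[0.7, -1.2], [0.3, 0.8]])
b = np.array([0.1, -0.2])

l, u = affine_bounds(W, b, l, u)
l, u = relu_bounds(l, u)
print("output lies in", list(zip(l, u)))  # valid for ANY input in the box
```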

They’re now moving beyond just testing and training a neural network to avoid disaster; they’re also starting to find a theoretical basis for a guarantee of robustness. They approached it as an “optimisation problem that tries to find the largest violation of the property being verified.”
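
A toy rendering of that framing, with an invented two-input model of our own: write the property’s violation as a function to maximize over the admissible inputs. Note that a search like the random one below can only refute the property; verification requires bounding the maximum itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Made-up stand-in for a network; NOT from the paper.
    return np.tanh(1.5 * x[0] - 0.8 * x[1])

# Property to verify: for all inputs within eps of x0, output <= threshold.
x0 = np.array([0.2, -0.1])
eps, threshold = 0.3, 0.5

def violation(x):
    return model(x) - threshold      # positive => property violated

# Random search for the largest violation inside the eps-box.
candidates = x0 + rng.uniform(-eps, eps, size=(10000, 2))
worst = max(candidates, key=violation)
print("largest violation found:", violation(worst))
# If the maximum over ALL admissible inputs were <= 0, the property
# would be formally verified; sampling alone can never establish that.
```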

Despite those achievements, at the end of the day, “much work is needed,” the authors write, “to build automated tools for ensuring that AI systems in the real world will do the ‘right thing’.”

Some of that work is to design algorithms that can test and train neural networks more intensely. But some of it probably involves a human element. It’s about setting the goals for AI, the objective function, so that they match what humans want.

“Building systems that can use partial human specifications and learn further specifications from evaluative feedback would be required,” they write, “as we build increasingly intelligent agents capable of exhibiting complex behaviors and acting in unstructured environments.”




GigaOm Radar for Security Orchestration, Automation, and Response (SOAR)


Security Orchestration, Automation, and Response (SOAR) emerged as a product category in the mid-2010s. At that point, SOAR solutions were very much automation and orchestration engines based on playbooks and integrations. Since then, the platforms have developed beyond the initial core SOAR capabilities to offer a more holistic experience to security analysts, with the aim of making SOAR the main workspace for practitioners.

Newer features offered by this holistic experience include case management, collaboration, simulations, threat enrichment, and visual correlations. Additionally, SOAR vendors have gradually implemented artificial intelligence (AI) and machine learning (ML) technologies to enable their platforms to learn from past events and fine-tune existing processes. This is where evolving threat categorization and autonomous improvement become differentiators in the space. While these two capabilities are not critical for a SOAR platform, they may offer advantages in terms of reduced mean time to resolution (MTTR), resilience against employee turnover, and overall flexibility.
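
As a rough illustration of the playbook pattern at the heart of these platforms, here is a hypothetical Python sketch. Every function and field name is invented; real products express the same logic through visual editors or YAML playbooks over vendor-specific integrations.

```python
def lookup_reputation(ip: str) -> str:
    # Stub for a threat-intel integration (invented for illustration).
    return "malicious" if ip.startswith("203.0.113.") else "unknown"

def block_ip(ip: str) -> None:
    print(f"[firewall] blocked {ip}")        # stand-in for a firewall API call

def open_case(alert: dict, assignee: str) -> None:
    print(f"[case] escalated {alert['id']} to {assignee}")  # human-in-the-loop

def phishing_playbook(alert: dict) -> None:
    # Enrichment: pull context the analyst would otherwise fetch by hand.
    alert["reputation"] = lookup_reputation(alert["src_ip"])
    if alert["reputation"] == "malicious":
        block_ip(alert["src_ip"])            # automated containment
    else:
        open_case(alert, assignee="tier2")   # ambiguous => route to a human

phishing_playbook({"id": "ALERT-1", "src_ip": "203.0.113.9"})
```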

We’ve observed a lot of acquisition activity in the SOAR space. This was to be expected considering that, after 2015, a sizable number of pure-play SOAR vendors entered the market. Larger players with a wider security portfolio are acquiring these SOAR-specific vendors in order to enter the automation and orchestration market. We expect to see more SOAR acquisitions as security tools converge, very likely into next-generation Security Information and Event Management (SIEM) products and services.

SIEM is a great candidate for a central management platform for security activities. It was designed to be a single source of truth, an aggregator of multiple security logs, but has been limited historically in its ability to carry out actions. In the past few years, however, SIEMs have either started developing their own automation and orchestration engines or integrated with third-party SOAR vendors. Through a number of acquisitions and developments, multiple players with wider security portfolios have begun to offer SOAR capabilities natively as part of other security solutions.

Going forward, we expect SOAR solutions to be further integrated into other products. This will include not only SIEM, but also solutions such as Extended Detection and Response (XDR) and IT automation. The number of pure-play SOAR vendors is unlikely to increase, although a handful may remain as fully agnostic solutions that enterprises can leverage in instances when their existing next-generation SIEM platforms do not meet all their use cases. However, for pure-play SOAR vendors to remain competitive, they will need to either expand into other security areas or consistently outperform their integrated counterparts.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.



GigaOm Radar for Disaster Recovery as a Service (DRaaS)


Very few organizations see disaster recovery (DR) for their IT systems as a business differentiator, so they often prefer to outsource the process and consume it as a service (DRaaS) that’s billed monthly. There are many DRaaS providers with varying backgrounds, whose services are often shaped by that background. Products that started as customer-managed DR applications tend to have the most mature orchestration and automation, but vendors may face challenges transforming their application into a consumable service. Backup as a Service (BaaS) providers typically have great consumption models and off-site data protection, but they might be lacking in rich orchestration for failover. Other DRaaS providers come from IaaS backgrounds, with well-developed, on-demand resource deployment for recovery and often a broader platform with automation capabilities.

Before you invest in a DRaaS solution, be clear about the value you expect from it. If your motivation is simply not to operate a recovery site, you probably want a service that uses technology similar to what you’re using at the protected site. If the objective is to spend less effort on DR protection, you will be less concerned with similarity and more with simplicity. And if you want to enable regular and granular testing of application recovery with on-demand resources, advanced failover automation and sandboxing will be vital features.

Be clear as well on the scale of disaster you are protecting against. On-premises recovery will protect against shared component failure in your data center. A DRaaS location in the same city will allow a lower recovery point objective (RPO) and provide lower latency after failover, but might be affected by the same disaster as your on-premises data center. A more distant DR location would be immune to your local disaster, but what about the rest of your business? It doesn’t help to have operational IT in another city if your only factory is under six feet of water.

DR services are designed to protect enterprise application architectures that are centered on VMs with persistent data and configuration. A lift-and-shift cloud adoption strategy leads to enterprise applications in the cloud, requiring cloud-to-cloud DR that is very similar to DRaaS from on-premises. Keep in mind, however, that cloud-native applications have different DR requirements.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.



GigaOm Radar for DDoS Protection


With ransomware getting all the news coverage when it comes to internet threats, it is easy to lose sight of distributed denial of service (DDoS) attacks, even as these attacks become more frequent and aggressive. In fact, the two threats have recently been combined in the DDoS ransom attack, in which a company is hit with a DDoS attack and a ransom is demanded in exchange for not launching a larger one. Clearly, a solid mechanism for thwarting such attacks is needed, and that is exactly what a good DDoS protection product provides. It allows users, both staff and customers, to access their applications with no indication that a DDoS attack is underway. To achieve this, the DDoS protection product needs to know about your applications and, most importantly, have the capability to absorb the massive bandwidth generated by botnet attacks.
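
To give a flavor of one elementary building block, here is a toy per-source token-bucket rate limiter in Python. This illustration is ours alone; real DDoS protection combines such limits with anycast networks, scrubbing centers, and application-aware filtering at vastly larger scale.

```python
import time
from collections import defaultdict

RATE = 5.0      # tokens replenished per second, per source
BURST = 10.0    # bucket capacity (max burst a source may send)

buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow(src_ip: str) -> bool:
    b = buckets[src_ip]
    now = time.monotonic()
    # Refill tokens in proportion to elapsed time, capped at BURST.
    b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
    b["last"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0
        return True     # request passes
    return False        # request dropped as part of a suspected flood

# A burst of 50 instant requests from one source: only the first ~10 pass.
print(sum(allow("198.51.100.7") for _ in range(50)), "of 50 allowed")
```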

All the DDoS protection vendors we evaluated have a cloud-service element in their products. The scale-out nature of cloud platforms is the right response to the scale-out nature of DDoS attacks, which are launched from botnets: thousands of compromised computers and/or embedded devices. A DDoS protection network that is larger, faster, and more distributed will defend better against larger DDoS attacks.

Two public cloud platforms we review have their own DDoS protection, both providing it for applications running on their public cloud and offering only cloud-based protection. We also look at two content delivery networks (CDNs) that offer only cloud-based protection but also have a large network of locations for distributed protection. Many of the other vendors offer both on-premises and cloud-based services that are integrated to provide unified protection against the various attack vectors that target the network and application layers.

Some of the vendors have been protecting applications since the early days of the commercial internet. These vendors tend to have products with strong on-premises protection and integration with a web application firewall or application delivery capabilities. These companies may not have developed their cloud-based protections as fully as the born-in-the-cloud DDoS vendors.

In the end, you need a DDoS protection platform equal to the DDoS threat that faces your business, keeping in mind that such threats are on the rise.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.
