Years of study have gone into the problem of making artificial intelligence “robust” to attack and less prone to failure. Yet the field is still coming to grips with what failure in AI actually means, as a blog post this week from Google’s DeepMind unit points out.
The missing element may seem obvious to some: it would really help if there were more human involvement in setting the boundary conditions for how neural networks are supposed to function.
Researchers Pushmeet Kohli, Sven Gowal, Krishnamurthy Dvijotham, and Jonathan Uesato have been studying the problem, and they identify much work that remains to be done, which they sum up under the title “Towards Robust and Verified AI: Specification Testing, Robust Training, and Formal Verification.”
There’s a rich history of verification testing for computer programs, but those approaches are “not suited for modern deep learning systems.”
Why? In large part because scientists are still learning about what it means for a neural network to follow the “specification” that was laid out for it. It’s not always clear what the specification even is.
“Specifications that capture ‘correct’ behavior in AI systems are often difficult to precisely state,” the authors write.
The notion of a “specification” comes out of the software world, the DeepMind researchers observe. It is the intended functionality of a computer system.
As the authors wrote in a post in December, in AI there may not be just one spec; there may be at least three. There is the “ideal” specification, what the system’s creators imagine it could do. Then there is the “design” specification, the “objective function” explicitly optimized for a neural network. And, lastly, there is the “revealed” specification, the way that the thing actually performs. They call these three specs, which can all vary quite a bit from one another, the wish, the design, and the behavior.
Designing artificial neural networks can be seen as the task of closing the gap between wish, design, and behavior. As they wrote in the December essay, “A specification problem arises when there is a mismatch between the ideal specification and the revealed specification, that is, when the AI system doesn’t do what we’d like it to do.”
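A toy sketch can make the three-spec distinction concrete. Everything here is hypothetical and illustrative, not DeepMind’s code: the “wish” is safe goal-reaching, the “design” is the reward actually optimized, and the “behavior” is what an optimizer finds.

```python
# Toy illustration of the wish/design/behavior gap (hypothetical example,
# not DeepMind's code). The wish is an agent that reaches the goal safely,
# but the design spec -- the reward actually optimized -- only counts
# goal-reaching, so the emergent behavior can satisfy the design while
# violating the wish.

def wish(trajectory):
    """Ideal specification: reach the goal without entering unsafe cells."""
    return trajectory[-1] == "goal" and "unsafe" not in trajectory

def design_reward(trajectory):
    """Design specification: the objective function we actually optimize."""
    return 1.0 if trajectory[-1] == "goal" else 0.0

# Revealed specification: the optimizer finds a shortcut through an
# unsafe cell, because the reward never penalized it.
behavior = ["start", "unsafe", "goal"]

assert design_reward(behavior) == 1.0   # looks perfect to the objective
assert not wish(behavior)               # but violates what we wanted
```

The point of the sketch is only that both checks can disagree on the same trajectory, which is exactly the mismatch the authors describe.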
They propose various routes to test and train neural networks that are more robust to errors, and presumably more faithful to specs.
One approach is to use AI itself to figure out what befuddles AI: using a reinforcement learning system, like Google’s AlphaGo, to find the worst possible ways that another reinforcement learning system can fail.
The authors did just that, in a paper published in December. “We learn an adversarial value function which predicts from experience which situations are most likely to cause failures for the agent.” The agent in this case refers to a reinforcement learning agent.
“We then use this learned function for optimisation to focus the evaluation on the most problematic inputs.” They claim that the method leads to “large improvements over random testing” of reinforcement learning systems.
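The paper learns its adversarial value function with function approximation over agent experience; the sketch below is only meant to show the prioritisation idea. The nearest-neighbour “learning” and all names are illustrative stand-ins, not the authors’ method: predict failure probability from past rollouts, then spend the expensive evaluation budget on the inputs ranked most likely to fail.

```python
import random

# Minimal sketch of failure-driven evaluation (hypothetical, not the
# paper's implementation): learn a cheap predictor of failure from past
# rollouts, then evaluate the inputs it ranks as most failure-prone,
# instead of sampling uniformly at random.

random.seed(0)

# Past experience: (situation_feature, failed?) pairs. Here failures
# cluster at high feature values -- a stand-in for "hard" states.
history = [(x / 100, x / 100 > 0.8) for x in range(100)]

def adversarial_value(x, history):
    """Predict failure probability as the fraction of failures among
    the k nearest previously seen situations (a crude learned function)."""
    k = 5
    nearest = sorted(history, key=lambda h: abs(h[0] - x))[:k]
    return sum(failed for _, failed in nearest) / k

candidates = [random.random() for _ in range(1000)]
# Focus the evaluation budget on the most problematic inputs.
to_evaluate = sorted(candidates, key=lambda x: -adversarial_value(x, history))[:20]

# Every prioritised input sits in the failure-prone region, so the
# evaluation budget is not wasted on easy states.
assert all(x > 0.75 for x in to_evaluate)
```

Random testing would spend roughly 80% of its budget on the easy region here; the learned ranking concentrates all of it where failures live, which is the “large improvement over random testing” the authors claim in miniature.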
Another approach is to train a neural network to avoid a whole range of outputs, to keep it from going entirely off the rails and making really bad predictions. The authors claim that a “simple bounding technique,” something called “interval bound propagation,” is capable of training a “verifiably robust” neural network. That work won them a “best paper” award at the NeurIPS conference last year.
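Interval bound propagation pushes a box of possible inputs through the network layer by layer, yielding bounds guaranteed to contain every possible output. The sketch below shows the idea for one affine layer plus a ReLU; it is illustrative only, not DeepMind’s implementation, and all numbers are made up.

```python
# Sketch of interval bound propagation (IBP) through one affine layer
# and a ReLU, using plain Python lists (illustrative, not DeepMind's
# implementation). Given an input box [lo, hi] per coordinate, compute
# a box guaranteed to contain every possible output.

def affine_bounds(lo, hi, W, b):
    """Propagate per-coordinate interval bounds through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo_acc = hi_acc = bias
        for w, l, h in zip(row, lo, hi):
            # A positive weight maps lower bound to lower bound;
            # a negative weight swaps the endpoints.
            lo_acc += w * (l if w >= 0 else h)
            hi_acc += w * (h if w >= 0 else l)
        out_lo.append(lo_acc)
        out_hi.append(hi_acc)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Input x in [0.75, 1.25] x [-0.25, 0.25] (e.g. a perturbed input pair).
lo, hi = [0.75, -0.25], [1.25, 0.25]
W, b = [[1.0, -2.0], [0.5, 0.5]], [0.0, -1.0]
lo, hi = relu_bounds(*affine_bounds(lo, hi, W, b))
assert lo == [0.25, 0.0] and hi == [1.75, 0.0]
```

Training then minimizes the worst-case loss implied by such bounds, which is what makes the resulting network “verifiably robust”: the guarantee comes from the propagated box, not from sampled attacks.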
They’re now moving beyond just testing and training a neural network to avoid disaster; they’re also starting to find a theoretical basis for a guarantee of robustness. They approached it as an “optimisation problem that tries to find the largest violation of the property being verified.”
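The framing is: pick a property, then search for the input that violates it most; if even the maximal violation is non-positive, the property holds. The authors use principled optimisation for this search; the grid search below is just an illustrative stand-in, with a made-up one-dimensional “network.”

```python
# Sketch of verification as optimisation (illustrative stand-in, not
# DeepMind's solver): to check a property such as "the output stays
# below a threshold for every input in a box", search for the input
# that violates it most. A non-positive maximum over the whole box
# would prove the property; this grid only checks sampled points.

def f(x):
    """Stand-in 'network': a fixed scalar function of one input."""
    return 0.5 * x * (1.0 - x)   # peaks at x = 0.5 with value 0.125

THRESHOLD = 0.2   # property: f(x) <= THRESHOLD for all x in [0, 1]

def largest_violation(f, lo, hi, steps=10001):
    """Maximize f(x) - THRESHOLD over a dense grid of the input box."""
    best = float("-inf")
    for i in range(steps):
        x = lo + (hi - lo) * i / (steps - 1)
        best = max(best, f(x) - THRESHOLD)
    return best

violation = largest_violation(f, 0.0, 1.0)
assert violation <= 0.0   # no counterexample found on the grid
```

A sound verifier replaces the grid with bounds over the entire box (as in IBP), so that “no violation found” becomes a guarantee rather than an observation.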
Despite those achievements, at the end of the day, “much work is needed,” the authors write, “to build automated tools for ensuring that AI systems in the real world will do the ‘right thing’.”
Some of that work is to design algorithms that can test and train neural networks more intensely. But some of it probably involves a human element. It’s about setting the goals — the objective function — for AI that matches what humans want.
“Building systems that can use partial human specifications and learn further specifications from evaluative feedback would be required,” they write, “as we build increasingly intelligent agents capable of exhibiting complex behaviors and acting in unstructured environments.”
CISO Podcast: Talking Anti-Phishing Solutions
Simon Gibson earlier this year published the report, “GigaOm Radar for Phishing Prevention and Detection,” which assessed more than a dozen security solutions focused on detecting and mitigating email-borne threats and vulnerabilities. As Gibson noted in his report, email remains a prime vector for attack, reflecting the strategic role it plays in corporate communications.
Earlier this week, Gibson’s report was a featured topic of discussion on David Spark’s popular CISO Security Vendor Relationship Podcast. In it, Spark interviewed a pair of chief information security officers—Mike Johnson, CISO for Salesforce, and James Dolph, CISO for Guidewire Software—to get their take on the role of anti-phishing solutions.
“I want to first give GigaOm some credit here for really pointing out the need to decide what to do with detections,” Johnson said when asked for his thoughts about selecting an anti-phishing tool. “I think a lot of companies charge into a solution for anti-phishing without thinking about what they are going to do when the thing triggers.”
As Johnson noted, the needs and vulnerabilities of a large organization aligned on Microsoft 365 are very different from those of a smaller outfit working with GSuite. A malicious Excel macro-laden file, for example, poses a credible threat to a Microsoft shop and therefore argues for a detonation solution to detect and neutralize malicious payloads before they can spread and morph. On the other hand, a smaller company is more exposed to business email compromise (BEC) attacks, since spending authority is often spread among many employees in these businesses.
Gibson’s radar report describes both in-line and out-of-band solutions, but Johnson said cloud-aligned infrastructures argue against traditional in-line schemes.
“If you put an in-line solution in front of [Microsoft] 365 or in front of GSuite, you are likely decreasing your reliability, because you’ve now introduced this single point of failure. Google and Microsoft have this massive amount of reliability that is built in,” Johnson said.
So how should IT decision makers go about selecting an anti-phishing solution? Dolph answered that question with a series of questions of his own:
“Does it nail the basics? Does it fit with the technologies we have in place? And then secondarily, is it reliable, is it tunable, is it manageable?” he asked. “Because it can add a lot of overhead, especially if you have a small team, if these tools are really disruptive to the email flow.”
Dolph concluded by noting that it’s important for solutions to provide insight that can help organizations target their protections, as well as support both training and awareness around threats. Finally, he urged organizations to consider how they can measure the effectiveness of solutions.
“I may look at other solutions in the future and how do I compare those solutions to the benchmark of what we have in place?”
Listen to the Podcast: CISO Podcast
Phish Fight: Securing Enterprise Communications
Yes, much of the world may have moved on from email to social media and culturally dubious TikTok dances, yet traditional electronic mail remains a foundation of business communication. And sadly, it remains a prime vector for malware, data leakage, and phishing attacks that can undermine enterprise protections. It doesn’t have to be that way.
In a just-released report titled “GigaOm Radar for Phishing Prevention and Detection,” GigaOm Analyst Simon Gibson surveyed more than a dozen enterprise-focused email security solutions. He found a range of approaches to securing communications that often can be fitted together to provide critical, defense-in-depth protection against even determined attackers.
Figure 1. GigaOm Radar for Email Phishing Prevention and Detection
“When evaluating these vendors and their solutions, it is important to consider your own business and workflow,” Gibson writes in the report, stressing the need to deploy solutions that best address your organization’s business workflow and email traffic. “For some it may be preferable to settle on one comprehensive solution, while for others building a best-of-breed architecture from multiple vendors may be preferable.”
In a field of competent solutions, Gibson found that Forcepoint, purchased recently by Raytheon, stood apart thanks to the layered protections provided by its Advanced Classification Engine. Area 1 and Zimperium, meanwhile, are both leaders that exhibit significant momentum, with Area 1 boosted by its recent solution partnership with Virtru, and Zimperium excelling in its deep commitment to mobile message security.
A mobile focus is timely, Gibson says in a video interview for GigaOm. He says companies are “tuning the spigot on” and enabling unprecedented access and reliance on mobile devices, which is creating an urgent need to get ahead of threats.
Gibson’s conclusion in the report? He singles out three things: Defense in depth, awareness of existing patterns and infrastructure, and a healthy respect for the “human factor” that can make security so hard to lock down.
When Is a DevSecOps Vendor Not a DevSecOps Vendor?
DevOps’ general aim is to enable a more efficient process for producing software and technology solutions and bringing stakeholders together to speed up delivery. But we know from experience that this inherently creative, outcome-driven approach often forgets about one thing until too late in the process—security. Too often, security is brought into the timeline just before deployment, risking last minute headaches and major delays. The security team is pushed into being the Greek chorus of the process, “ruining everyone’s fun” by demanding changes and slowing things down.
But as we know, in the complex, multi-cloud and containerized environment we find ourselves in, security is becoming more important and challenging than ever. And the costs of security failure are not only measured in slower deployment, but in compliance breaches and reputational damage.
The term “DevSecOps” has been coined to characterize how security needs to be at the heart of the DevOps process. This is in part principle and part tools. As a principle, DevSecOps fits with the concept of “shifting left,” that is, ensuring that security is treated as early as possible in the development process. So far, so simple.
From a tooling perspective, however, things get more complicated, not least because the market has seen a number of platforms marketing themselves as DevSecOps. As we have been writing our Key Criteria report on the subject, we have learned that not all DevSecOps vendors are necessarily DevSecOps vendors. Specifically, we have learned to distinguish capabilities that directly enable the goals of DevSecOps from a process perspective, from those designed to support DevSecOps practices. We could define them as: “Those that do, and those that help.”
This is how to tell the two types of vendor apart and how to use them.
Vendors Enabling DevSecOps: “Tools That Do”
A number of tools work to facilitate the DevSecOps process; let’s bite the bullet and call them DevSecOps tools. They help teams set out each stage of software development, bringing siloed teams together behind a unified vision that allows fast, high-quality development, with security considerations at its core. DevSecOps tools work across the development process, for example:
- Create: Help to set and implement policy
- Develop: Apply guidance to the process and aid its implementation
- Test: Facilitate and guide security testing procedures
- Deploy: Provide reports to assure confidence to deploy the application
The key element that sets these tool sets apart is the ability to automate and reduce friction within the development process. They will prompt action, stop a team from moving from one stage to another if the process has not adequately addressed security concerns, and guide the roadmap for the development from start to finish.
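The gating behavior described above can be sketched in a few lines. This is a hypothetical toy, not any vendor’s API; stage names and check results are illustrative.

```python
# Hypothetical sketch of a DevSecOps gate (illustrative, not a vendor
# API): a pipeline refuses to advance past a stage until that stage's
# security checks exist and all pass.

STAGES = ["create", "develop", "test", "deploy"]

def advance(results):
    """Walk the stages in order; a stage with missing or failing
    security checks blocks the build and reports where it stopped."""
    for stage in STAGES:
        checks = results.get(stage)
        if not checks or not all(checks):
            return stage          # gate: blocked here
    return "released"

# Policy set and code reviewed, but a security test failed:
results = {
    "create": [True], "develop": [True],
    "test": [True, False],        # e.g. a SAST scan flagged a finding
}
assert advance(results) == "test"

results["test"] = [True, True]
results["deploy"] = [True]
assert advance(results) == "released"
```

Note that a stage with no recorded checks also blocks: treating “no evidence” as failure is what distinguishes a gate from a mere report.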
Supporting DevSecOps: “Tools That Help”
In this category we place those tools which aid the execution and monitoring of good DevSecOps principles. Security scanning and application/infrastructure hardening tools are a key element of these processes: software composition analysis (SCA) forms part of the Develop stage, static/dynamic application security testing (SAST/DAST) is integral to the Test stage, and runtime application self-protection (RASP) is key to the Deploy stage.
Tools like this are a vital part of the security tooling layer, especially just before deployment, and they often come with APIs so they can be plugged into the CI/CD process. However, while these capabilities are very important to DevSecOps, they play more of a supporting role, rather than being DevSecOps tools per se.
DevSecOps-washing is not a good idea for the enterprise
While one might argue that security should never have been shifted right, DevSecOps exists to ensure that security best practices take place across the development lifecycle. A corollary exists to the idea of “tools that help,” namely that organizations implementing these tools are not “doing DevSecOps,” any more than vendors providing these tools are DevSecOps vendors.
The only way to “do” DevSecOps is to fully embrace security at a process management and governance level: This means assessing risk, defining policy, setting review gates, and disallowing progress for insecure deliverables. Organizations that embrace DevSecOps can get help from what we are calling DevSecOps tools, as well as from scanning and hardening tools that help support its goals.
At the end of the day, all security and governance boils down to risk: If you buy a scanning tool so you can check a box that says “DevSecOps,” you are potentially adding to your risk posture, rather than mitigating it. So, get your DevSecOps strategy fixed first, then consider how you can add automation, visibility, and control using “tools that do,” as well as benefit from “tools that help.”