The real future of healthcare is cultural change, not just AI and other technology

“It actually is quite easy to be a futurist with regards to where are we going with health,” says Dr Ron Grenfell, director of health and biosecurity at CSIRO.

“It takes 15+ years to get evidence into practice,” he told the Commonwealth Bank’s Future of Health conference in Sydney last week. The “inertia of the system” will hold back the adoption of a lot of technology that’s being pitched as the future of health.

That, in your writer’s view, is one of the two big conceptual challenges at the heart of so many discussions of the digital transformation of healthcare. Vendors are pitching technologies like AI and chatbots to reduce the workload of humans, yet the healthcare sector is way behind the pace.

Dr Kevin Cheng is founder of Australian healthcare provider Osana. They use cloud communications provider 8×8 for their own needs, and use cloud-based medical records, but they run into the usual problems when communicating with other providers.

“I tried really hard not to buy a fax machine for our startup, but we failed,” Cheng said during a roundtable in Sydney last week, to much knowing laughter.

“When I talk to allied health and specialists, we’re often crossing IT barriers. It’s hard to get people on the phone to talk, so we’re very transactional … the other clinician could be sitting in a room next door, but we’re literally writing letters to each other and not talking,” he said.

Cheng believes Australia is lagging behind other high-tech nations. GPs in the US are now doing many of their consultations virtually, he said, whereas in Australia that generally only happens in remote locations.

“We’re having to create our own scorecards and dashboards in our own datasets, because there’s no reporting analytics that is on the market that fits our workflows,” Cheng said.

Phil Kernick, co-founder and chief technology officer of information security firm CQR Consulting, confirmed that belief.

“Nowadays doctors use computers for everything, and it doesn’t matter which industry you’re in, these are run badly. They’re run inefficiently. They’re run insecurely,” he said.

When it was “just” data, that didn’t matter so much. But software is now integrated into diagnostic and therapeutic devices, and if vendors are to be believed, AI will soon be taking control.

See: AI and the NHS: How artificial intelligence will change everything for patients and doctors

“I have a real concern that as everything moves to technology, and when we get into AI and machine learning, we stop understanding how the technology works,” Kernick said.

We’re building systems that have a “very shaky foundation, and there are no regulations around this,” he said.

“If you look at the Therapeutic Goods Act, you look at how we regulate medical equipment, there are no software security standards. The information page actually says we take a risk-based approach, and use the same risk-based and safety-first approach to all systems, whether they include software or not. I mean, that’s just waffle. It doesn’t mean anything.”

Making patients the actual focus of healthcare

Cheng says that Osana’s strategy is to put the patient’s health at the centre of their business, focusing on prevention and outcomes, rather than the transactional fee-for-service treatment model.

“Patients are going to be consumers, so they’re our customers, and that means that we need to practice in a different way. We want to be partners with patients in their health and well-being,” he said, and data and apps will be part of that.

Dr Bertalan Mesko, director of The Medical Futurist Institute, says that the healthcare sector could and should go much further.

“By 2050 the most important change will be that patients will become the point of care,” he told the CommBank conference from Budapest. Not just becoming more engaged, or “empowered”, but the actual point of care and service delivery, using their own apps and devices to gather data, rather than travelling to medical facilities for diagnostic tests.

This isn’t so much a technological revolution, according to Mesko, but a cultural revolution. In your writer’s view, that’s the second big conceptual challenge.

“Since Hippocrates, for 2000 years, medicine has been quite straightforward. Medical professionals know everything, and they let patients come to them for help, they tell them what to do, and patients go home, and either they comply with what they were told or not. Usually half of them do, and half of them don’t. That’s quite a bad success rate,” Mesko said.

Medical knowledge, and even the patient’s own data, were held in the medical professionals’ “ivory tower,” he said. But that’s changing.

See: VR, AR and the NHS: How virtual and augmented reality will change healthcare

“With crowdsourcing and crowdfunding, with Amazon and social media, with open access to medical papers, and all of these online communities out there, now patients can get access to the same resources,” Mesko said.

“The hierarchy of the doctor-patient relationship is transforming into an equal partnership.”

And sometimes patients race way ahead of their doctors. Diabetes patients, for example, have combined a continuous glucose monitor, an insulin pump, and a small computer such as a Raspberry Pi, to create what is in effect a do-it-yourself pancreas.
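
To make the loop concrete, here is a minimal sketch of that control step in Python. It is not the OpenAPS algorithm; the target, sensitivity factor, and readings are hypothetical placeholders that only illustrate the pattern of reading the continuous glucose monitor and nudging the pump’s basal rate toward a target.

    # Illustrative closed-loop ("DIY pancreas") control step.
    # NOT the OpenAPS algorithm: the target, sensitivity and values below
    # are hypothetical. Real rigs poll a CGM every few minutes and send
    # the adjusted rate to the insulin pump over its local interface.

    TARGET_MMOL = 6.0   # desired blood glucose (mmol/L), illustrative
    SENSITIVITY = 0.5   # basal adjustment (units/hour) per mmol/L of error, illustrative

    def next_basal_rate(glucose: float, current_basal: float) -> float:
        """Return an adjusted basal rate from the latest CGM reading."""
        error = glucose - TARGET_MMOL
        return max(0.0, current_basal + SENSITIVITY * error)  # a pump cannot deliver a negative rate

    print(round(next_basal_rate(glucose=8.2, current_basal=1.0), 2))  # 2.1 units/hour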

“Many of us have no medical or engineering training and we work on improvements in the evening or at the weekend, for free,” Dana Lewis, founder of the Open Artificial Pancreas System project, told The Guardian in July.

“Commercial devices similar to ours are now being trialled and gradually coming on to the market: we’re happy to be helping companies to speed up development. The most important thing is that people don’t have to wait,” she said.

Governments and regulators “seem to be pretty terrified about these developments and technologies”, Mesko said.

“When patients find out that there’s a solution technologically for their health problem, they will not wait for regulators to come up with a solution. They will make those solutions themselves,” he said.

“It’s possible for a government to come up with a digital health policy — not just a healthcare policy or a health IT policy, those are different things — a digital health policy that focuses on the cultural aspects of the changes technologies initiate.”

It’s the Terminator scenario forever

This is not to say that the technology isn’t important. AI-powered chatbots can take care of routine patient interactions, for example, leaving clinicians more time for managing patients’ health.

According to Murray Brozinsky, chief strategy officer of Conversa Health, the company’s chatbots have saved Northwell Health some $3,400 per patient when used to help manage patients after hip or knee replacement surgery.

Rather than having a clinician call a patient every week to see how they’re doing, a chatbot can check in daily, or whenever the patient has a question. Using what Brozinsky prefers to call “augmented intelligence”, any problems can be escalated more quickly.
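
The pattern Brozinsky describes can be sketched in a few lines. This is not Conversa Health’s product logic; the questions, thresholds, and outcomes below are hypothetical, and only show the “check in daily, escalate when needed” idea.

    # Hypothetical daily check-in triage. The fields and thresholds are
    # illustrative, not Conversa Health's actual rules: an automated
    # question set, with escalation to a clinician when answers look risky.
    from typing import Optional

    def daily_check_in(pain_score: int, wound_redness: bool,
                       patient_question: Optional[str] = None) -> str:
        """Return what the bot should do with today's post-surgery answers."""
        if pain_score >= 7 or wound_redness:
            return "escalate"        # flag for a clinician to call today
        if patient_question:
            return "route_question"  # pass the free-text question to the care team
        return "all_clear"           # log the response, no human needed

    print(daily_check_in(pain_score=3, wound_redness=False))           # all_clear
    print(daily_check_in(pain_score=8, wound_redness=False))           # escalate
    print(daily_check_in(pain_score=2, wound_redness=False,
                         patient_question="Can I shower yet?"))        # route_question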

Mesko, like many other medtech boosters, thinks AI will be the key technological change between now and 2050, but he says it’s important to be clear about what that means.

Artificial narrow intelligence is what we have now, in everything from a car’s braking system to Amazon’s recommendation engine.

Read: IoT and the NHS: Why the Internet of Things will create a healthcare revolution

Artificial general intelligence would mean having one algorithm with the cognitive ability of one human.

“We are far away from that,” Mesko said.

“And then we would have artificial superintelligence, meaning one algorithm would have the cognitive power of humanity, basically meaning that we are doomed. It’s the ‘Terminator’ scenario forever.”

“So I think we have to draw a line under which point it would be great to develop AI. It will be just before reaching artificial general intelligence.”

Phish Fight: Securing Enterprise Communications

Yes, much of the world may have moved on from email to social media and culturally dubious TikTok dances, yet traditional electronic mail remains a foundation of business communication. And sadly, it remains a prime vector for malware, data leakage, and phishing attacks that can undermine enterprise protections. It doesn’t have to be that way.

In a just released report titled “GigaOm Radar for Phishing Prevention and Detection,” GigaOm Analyst Simon Gibson surveyed more than a dozen enterprise-focused email security solutions. He found a range of approaches to securing communications that often can be fitted together to provide critical, defense-in-depth protection against even determined attackers.

Figure 1. GigaOm Radar for Email Phishing Prevention and Detection

“When evaluating these vendors and their solutions, it is important to consider your own business and workflow,” Gibson writes in the report, stressing the need to deploy solutions that best address your organization’s business workflow and email traffic. “For some it may be preferable to settle on one comprehensive solution, while for others building a best-of-breed architecture from multiple vendors may be preferable.”

In a field of competent solutions, Gibson found that Forcepoint, purchased recently by Raytheon, stood apart thanks to the layered protections provided by its Advanced Classification Engine. Area 1 and Zimperium, meanwhile, are both leaders that exhibit significant momentum, with Area 1 boosted by its recent solution partnership with Virtru, and Zimperium excelling in its deep commitment to mobile message security.

A mobile focus is timely, Gibson says in a video interview for GigaOm. He says companies are “tuning the spigot on” and enabling unprecedented access and reliance on mobile devices, which is creating an urgent need to get ahead of threats.

Gibson’s conclusion in the report? He singles out three things: Defense in depth, awareness of existing patterns and infrastructure, and a healthy respect for the “human factor” that can make security so hard to lock down.

When Is a DevSecOps Vendor Not a DevSecOps Vendor?

DevOps’ general aim is to enable a more efficient process for producing software and technology solutions and bringing stakeholders together to speed up delivery. But we know from experience that this inherently creative, outcome-driven approach often forgets about one thing until too late in the process—security. Too often, security is brought into the timeline just before deployment, risking last minute headaches and major delays. The security team is pushed into being the Greek chorus of the process, “ruining everyone’s fun” by demanding changes and slowing things down.

But as we know, in the complex, multi-cloud and containerized environment we find ourselves in, security is becoming more important and challenging than ever. And the costs of security failure are not only measured in slower deployment, but in compliance breaches and reputational damage.

The term “DevSecOps” has been coined to characterize how security needs to be at the heart of the DevOps process. This is part principle and part tooling. As a principle, DevSecOps fits with the concept of “shifting left,” that is, ensuring that security is addressed as early as possible in the development process. So far, so simple.

From a tooling perspective, however, things get more complicated, not least because the market has seen a number of platforms marketing themselves as DevSecOps. As we have been writing our Key Criteria report on the subject, we have learned that not all DevSecOps vendors are necessarily DevSecOps vendors. Specifically, we have learned to distinguish capabilities that directly enable the goals of DevSecOps from a process perspective, from those designed to support DevSecOps practices. We could define them as: “Those that do, and those that help.”

This is how to tell the two types of vendor apart and how to use them.

Vendors Enabling DevSecOps: “Tools That Do”

A number of tools work to facilitate the DevSecOps process – let’s bite the bullet and call them DevSecOps tools. They help teams set out each stage of software development, bringing siloed teams together behind a unified vision that allows fast, high-quality development, with security considerations at its core. DevSecOps tools work across the development process, for example:

  • Create: Help to set and implement policy
  • Develop: Apply guidance to the process and aid its implementation
  • Test: Facilitate and guide security testing procedures
  • Deploy: Provide reports to assure confidence to deploy the application

The key element that sets these tool sets apart is the ability to automate and reduce friction within the development process. They will prompt action, stop a team from moving from one stage to another if the process has not adequately addressed security concerns, and guide the roadmap for the development from start to finish.
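
As a rough illustration of that gating behaviour, the sketch below blocks a stage transition until its security prerequisites have been recorded as complete. The stage names and requirements are invented for the example, not taken from any particular DevSecOps platform.

    # Hypothetical stage gate: a team may only enter a stage once the
    # security work required for it has been recorded. Stage names and
    # prerequisites are illustrative, not any vendor's actual model.

    REQUIRED = {
        "develop": {"policy_set"},
        "test":    {"policy_set", "threat_model_reviewed"},
        "deploy":  {"policy_set", "threat_model_reviewed", "security_tests_passed"},
    }

    def can_enter(stage: str, completed: set) -> bool:
        """Allow the move only if every security prerequisite is satisfied."""
        missing = REQUIRED[stage] - completed
        if missing:
            print(f"Blocked from '{stage}': missing {sorted(missing)}")
            return False
        return True

    done = {"policy_set", "threat_model_reviewed"}
    print(can_enter("test", done))    # True
    print(can_enter("deploy", done))  # False: security_tests_passed is missing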

Supporting DevSecOps: “Tools That Help”

In this category we place those tools which aid the execution and monitoring of good DevSecOps principles. Security scanning and application/infrastructure hardening tools are a key element of these processes: software composition analysis (SCA) forms part of the develop stage, static/dynamic application security testing (SAST/DAST) is integral to the test stage, and runtime application self-protection (RASP) is key to the deploy stage.

Tools like this are a vital part of the security tooling layer, especially just before deployment – and they often come with APIs so they can be plugged into the CI/CD process. However, while these capabilities are very important to DevSecOps, they play more of a supporting role, rather than being DevSecOps tools per se.
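
In practice, that usually means calling the scanner from a pipeline step and failing the build on serious findings. The sketch below assumes a hypothetical scanner command and JSON report format; real SCA/SAST tools each have their own CLIs and output schemas.

    # Sketch of wiring a "tool that helps" into CI/CD: run a scan, parse the
    # findings, and fail the pipeline above a severity threshold. The command
    # name and report format are hypothetical placeholders.
    import json
    import subprocess
    import sys

    def run_scan() -> list:
        """Run a hypothetical scanner that emits JSON findings on stdout."""
        result = subprocess.run(
            ["example-sast-scanner", "--format", "json", "."],  # placeholder CLI
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)

    def gate(findings: list, fail_on: str = "high") -> None:
        """Exit non-zero (failing the CI job) if any finding meets the threshold."""
        blocking = [f for f in findings if f.get("severity") == fail_on]
        if blocking:
            print(f"{len(blocking)} {fail_on}-severity findings; failing the build.")
            sys.exit(1)

    if __name__ == "__main__":
        gate(run_scan())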

DevSecOps-washing is not a good idea for the enterprise

While one might argue that security should never have been shifted right, DevSecOps exists to ensure that security best practices take place across the development lifecycle. A corollary exists to the idea of “tools that help,” namely that organizations implementing these tools are not “doing DevSecOps,” any more than vendors providing these tools are DevSecOps vendors.

The only way to “do” DevSecOps is to fully embrace security at a process management and governance level: This means assessing risk, defining policy, setting review gates, and disallowing progress for insecure deliverables. Organizations that embrace DevSecOps can get help from what we are calling DevSecOps tools, as well as from scanning and hardening tools that help support its goals.

At the end of the day, all security and governance boils down to risk: If you buy a scanning tool so you can check a box that says “DevSecOps,” you are potentially adding to your risk posture, rather than mitigating it. So, get your DevSecOps strategy fixed first, then consider how you can add automation, visibility, and control using “tools that do,” as well as benefit from “tools that help.”

High Performance Application Security Testing

This free one-hour webinar from GigaOm Research is hosted by Jake Dolezal, a GigaOm analyst and expert in application and API testing. His presentation focuses on the results of high-performance testing we completed against two security mechanisms: ModSecurity on NGINX and NGINX App Protect. Additionally, we tested the AWS Web Application Firewall (WAF) as a fully managed security offering.

While performance is important, it is only one criterion for selecting a web application firewall. The report’s results are revealing about these platforms, and the methodology is presented clearly and transparently so you can replicate the tests to mimic your own workloads and requirements.

Register now to join GigaOm and webinar sponsor NGINX for this free expert webinar.
