
Study shows programmers will take the easy way out and not implement proper password security


Freelance developers need to be explicitly told to write code that stores passwords securely, a recent study has revealed.

In an experiment that involved 43 programmers hired via the Freelancer.com platform, University of Bonn academics discovered that developers tend to take the easy way out and write code that stores user passwords in an unsafe manner.

For their study, the German academics asked a group of 260 Java programmers to write a user registration system for a fake social network.

Of the 260 developers, only 43 took up the job, which involved using technologies such as Java, JSF, Hibernate, and PostgreSQL to create the user registration component.

Of the 43, the academics paid half of the group €100 and the other half €200, to determine whether higher pay made a difference in the implementation of password security features.

Further, they divided the developer group a second time, prompting half of the developers to store passwords in a secure manner and leaving the other half to store passwords in their preferred method. This yielded four groups:

  • P100 – paid €100 and prompted to use a secure password storage method
  • P200 – paid €200 and prompted to use a secure password storage method
  • N100 – paid €100 and not prompted for password security
  • N200 – paid €200 and not prompted for password security


Image: Naiakshina et al.

Researchers said developers took three days to submit their work, and that 18 of the 43 had to be asked to resubmit their code with a password security system in place, because their first submissions stored passwords in plaintext.

Of the 18 who had to resubmit their code, 15 were part of the group that was never told the user registration system needed to store passwords securely, suggesting that developers don’t inherently think about security when writing code.

Not-prompted results

Image: Naiakshina et al.

The other three were from the half that was told to use a secure method to store passwords, but stored passwords in plaintext anyway.

Prompted results

Image: Naiakshina et al.

The results show that the understanding of what “secure password storage” means differs greatly across the web development community.

Of the password storage methods developers chose to implement for this study, only two, PBKDF2 and Bcrypt, are considered appropriate for storing passwords. The list below shows how many of the 43 developers chose each method:

8 – Base64
10 – MD5
1 – SHA-1
3 – 3DES
3 – AES
5 – PBKDF2
1 – HMAC/SHA1
5 – SHA-256
7 – Bcrypt

The first, Base64, isn’t even an encryption algorithm but an encoding function, something that the participating developers didn’t seem to know. MD5 is similarly unsuitable: it is a hashing function, not encryption, and one that has long been considered broken and is fast enough to brute-force at scale.

“Many participants used hashing and encryption as synonyms,” the team of academics said in their research paper.

“Of the 18 participants who received the additional security request, 3 decided to use Base64 and argued, for example: ‘[I] encrypted it so the clear password is not visible’ and ‘It is very tough to decrypt’,” the researchers said, highlighting that some study participants didn’t know the basic difference between an encryption algorithm and a function that just jumbles characters around.
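To see why Base64 offers no protection at all, consider this minimal Java sketch (Java being the language the study’s task required); the password value is hypothetical:

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class Base64IsNotEncryption {
        public static void main(String[] args) {
            String password = "hunter2"; // hypothetical plaintext password
            // "Storing" the password as Base64 involves no key and no secret:
            String stored = Base64.getEncoder()
                    .encodeToString(password.getBytes(StandardCharsets.UTF_8));
            System.out.println("Stored value: " + stored); // aHVudGVyMg==
            // Anyone who can read the database reverses it in one call:
            String recovered = new String(Base64.getDecoder().decode(stored),
                    StandardCharsets.UTF_8);
            System.out.println("Recovered: " + recovered); // hunter2
        }
    }

Unlike encryption, there is nothing to crack here: decoding requires no key, which is why Base64 belongs in the same category as plaintext storage.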

Furthermore, only 15 of the 43 developers chose to implement salting, the practice of combining a random value with each password before hashing, so that the hashes stored inside an application’s database can’t be cracked with precomputed tables and identical passwords don’t produce identical hashes.
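As a rough illustration of salting, here is a minimal Java sketch using PBKDF2 from the standard library (javax.crypto); the iteration count is illustrative only, and a real system should follow current OWASP guidance:

    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public class SaltedHashDemo {
        public static void main(String[] args) throws Exception {
            char[] password = "hunter2".toCharArray(); // hypothetical password
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt); // fresh random salt per user
            // PBKDF2 stretches the password; the iteration count sets the cost
            PBEKeySpec spec = new PBEKeySpec(password, salt, 310_000, 256);
            byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(spec).getEncoded();
            // Store the salt, iteration count, and hash; never the password itself
            System.out.println(Base64.getEncoder().encodeToString(salt) + ":"
                    + Base64.getEncoder().encodeToString(hash));
        }
    }

Because every user gets a different salt, two users with the same password end up with different stored hashes, and an attacker can’t precompute a single lookup table for the whole database.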

The study also found that 17 of the 43 developers copied their code from internet sites, suggesting that the freelancers didn’t have the necessary skills to develop a secure system from scratch, and chose to use code that might be outdated or even riddled with bugs.

Paying developers higher rates didn’t help considerably, researchers said.

However, the research team found that giving programmers explicit instructions to implement a secure password storage system yielded better results than saying nothing at all and expecting developers to think of security by themselves.

Nonetheless, without precise instructions, developers chose what they believed was a secure password storage method but in reality was not, suggesting that oversight from a security professional is needed when designing any type of security system.

The study’s results clearly show that freelance developers’ knowledge of cyber-security best practices varies wildly from person to person. This might be due to outdated training or no training at all, yet again making a case against using developers without cyber-security experience for such jobs.

Attacks against cryptographic algorithms have been disclosed left and right over the past two decades, and something a developer learned from an outdated textbook may not stand up to scrutiny today. A good starting point for better password practices is this OWASP cheat sheet.
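As a concrete example of the kind of practice the cheat sheet recommends, here is a minimal sketch of hashing and verifying a password with bcrypt, using the open-source jBCrypt library (org.mindrot.jbcrypt, assumed to be available on the classpath):

    import org.mindrot.jbcrypt.BCrypt; // third-party jBCrypt library

    public class BcryptDemo {
        public static void main(String[] args) {
            // gensalt(12) sets the work factor; higher is slower and harder to brute-force
            String stored = BCrypt.hashpw("hunter2", BCrypt.gensalt(12));
            System.out.println(stored); // salt and cost are embedded in the hash string
            // Verification re-hashes the candidate with the embedded salt:
            System.out.println(BCrypt.checkpw("hunter2", stored)); // true
            System.out.println(BCrypt.checkpw("wrong", stored));   // false
        }
    }

Because bcrypt embeds the salt and cost factor in the stored string, no separate salt column is needed, and the work factor can be raised over time as hardware gets faster.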

More details on this University of Bonn study are available in the research paper entitled “‘If you want, I can store the encrypted password.’ A Password-Storage Field Study with Freelance Developers.”

This study is a continuation of two similar studies, from 2017 and 2018, that used students as subjects instead of freelance developers.

In the previous studies, students said they would have implemented secure password storage if they had been writing code for a company. The 2019 study showed that freelance developers aren’t any better than unsupervised students.



Key Criteria for Evaluating a Distributed Denial of Service (DDoS) Solution


Although ransomware is making all the headlines today, it’s not the only kind of attack that can come between you and your customers. Distributed denial of service (DDoS) attacks, in which a target website is overwhelmed with spurious traffic, have become increasingly common.

Websites and online applications have become critical to how businesses communicate with their customers and partners. If those websites and applications are not available, there is a dollars-and-cents cost for businesses, both directly in lost business and indirectly through loss of reputation. It doesn’t matter to the users of the website whether the attacker has a political point to make, wants to hurt their victim financially, or is motivated by ego: if the website is unavailable, users will not be happy.

Recent DDoS attacks have utilized thousands of compromised computers and can involve hundreds of gigabits per second of attack bandwidth. A DDoS protection platform must inspect all of the traffic destined for the protected site and discard or absorb all of the hostile traffic while allowing legitimate traffic to reach the site.

Often the attack simply aims vast amounts of network traffic at the operating system under the application. These “volumetric” attacks usually occur at network Layer 3 or 4 and originate from compromised computers called bots. Few companies have enough internet bandwidth to mitigate an attack of this magnitude on-premises, so DDoS protection needs to be distributed across multiple data centers around the world to be effective against these massive attacks. The sheer scale of infrastructure required means that most DDoS platforms are multi-tenant cloud services.

Other attacks target the application itself, at Layer 7, with either a barrage of legitimate requests or with requests carefully crafted to exploit faults in the site. These Layer 7 attacks look superficially like real requests and require careful analysis to separate them from legitimate traffic.

Attackers do not stand still. As DDoS protection platforms learn to protect against one attack method, attackers will find a new method to take down a website. So DDoS protection vendors don’t stand still either. Using information gathered from observing all of their protected sites, vendors are able to develop new techniques to protect their clients.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.


Cloud Data Security


Data security has become an integral part of the technology stack for modern applications. Protecting application assets and data against cybercriminal activities, insider threats, and basic human negligence is no longer an afterthought. It must be addressed early and often, both in the application development cycle and the data analytics stack.

The requirements have grown well beyond the simplistic features provided by data platforms, and as a result a competitive industry has emerged to address the security layer. The capabilities of this layer must be more than thorough; they must also be usable and streamlined, adding minimal overhead to existing processes.

To measure the policy-management burden, we designed a reproducible test that included a standardized, publicly available dataset and a number of access control policy management scenarios based on real-world use cases we have observed for cloud data workloads. We tested two options: Apache Ranger with Apache Atlas, and Immuta. This study contrasts a largely role-based access control model with object tagging (OT-RBAC) against a pure attribute-based access control (ABAC) model, using these respective technologies.
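To make the two models concrete, here is a small, hypothetical Java sketch, not drawn from the report or from either product, contrasting a role-membership check with an attribute predicate; the attribute names are invented for illustration:

    import java.util.Map;
    import java.util.Set;

    public class AccessModels {
        // RBAC: access follows from membership in a role defined per dataset.
        static boolean rbacAllows(Set<String> userRoles, String requiredRole) {
            return userRoles.contains(requiredRole);
        }

        // ABAC: access follows from a predicate over user and resource attributes,
        // so one policy can cover many users, tables, and rows dynamically.
        static boolean abacAllows(Map<String, String> user, Map<String, String> resource) {
            return user.get("department").equals(resource.get("owningDepartment"))
                    && "low".equals(resource.get("sensitivity"));
        }

        public static void main(String[] args) {
            System.out.println(rbacAllows(Set.of("analyst_finance"), "analyst_finance")); // true
            System.out.println(abacAllows(
                    Map.of("department", "finance"),
                    Map.of("owningDepartment", "finance", "sensitivity", "low"))); // true
        }
    }

The intuition behind the policy-change counts reported below is that the role-based style tends to need a new role and a new policy for each new combination of users and data, while a single attribute predicate keeps applying as users and datasets change.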

This study captures the time and effort involved in managing the ever-evolving access control policies at a modern data-driven enterprise. With this study, we show the impacts of data access control policy management in terms of:

  • Dynamic versus static
  • Scalability
  • Evolvability

In our scenarios, Ranger alone required 76x more policy changes than Immuta to accomplish the same data security objectives, while Ranger with Apache Atlas required 63x more policy changes. For our advanced use cases, Immuta required only one policy change each, while Ranger was not able to fulfill the data security requirement at all.

This study exposed the limitations of extending legacy Hadoop security components into cloud use cases. Apache Ranger uses static policies in an OT-RBAC model for the Hadoop ecosystem, with very limited support for attributes. The difference between it and Immuta’s attribute-based access control (ABAC) model became clear: by leveraging dynamic variables, nested attributes, global policies, and row-level security, Immuta can be implemented and updated far more quickly than Ranger.

As organizations migrate to the cloud and expand their data use, relying on Ranger as a data security mechanism creates a high policy-management burden compared to Immuta, which this study shows to provide scalability, clarity, and evolvability for a complex enterprise’s data security and governance needs.

The chart in Figure 1 reveals the difference in cumulative policy changes required for each platform configuration.

Figure 1. Difference in Cumulative Policy Changes

The assessment and scoring rubric and methodology are detailed in the report. We leave it to you, the reader, to judge their fairness and to discern what is of value. We hope this report is informative and helpful in uncovering some of the challenges and nuances of data governance platform selection. You are encouraged to compile your own representative use cases and workflows and review these platforms in a way that is applicable to your requirements.


GigaOm Radar for Data Loss Prevention


Data is at the core of modern business: It is our intellectual property, the lifeblood of our interactions with our employees, partners, and customers, and a true business asset. But in a world of increasingly distributed workforces, a growing threat from cybercriminals and bad actors, and ever more stringent regulation, our data is at risk and the impact of losing it, or losing access to it, can be catastrophic.

With this in mind, ensuring a strong data management and security strategy must be high on the agenda of any modern enterprise. Security of our data has to be a primary concern. Ensuring we know how, why, and where our data is used is crucial, as is the need to be sure that data does not leave the organization without appropriate checks and balances.

Keeping ahead of this challenge and mitigating the risk requires a multi-faceted approach. People and processes are key to any data loss prevention (DLP) strategy, as, of course, is technology.

This has led to a reevaluation of both the technology and the approach to DLP: a recognition that we must evolve an approach that is holistic, intelligent, and able to apply context to our data usage. DLP must form part of a broader risk management strategy.

Within this report, we evaluate the leading vendors who are offering solutions that can form part of your DLP strategy—tools that understand data as well as evaluate insider risk to help mitigate the threat of data loss. This report aims to give enterprise decision-makers an overview of how these offerings can be a part of a wider data security approach.
