Facebook said today the number of users who had their Facebook authentication tokens stolen in a security breach that took place last month is actually 30 million, and not 50 million, as the company initially announced.
Attackers stole authentication tokens for these 30 million accounts, but they also stole additional data for 29 million, Facebook said.
- For 15 million users, attackers harvested name and contact details (phone number, email, or both, depending on what people had on their profiles).
- For 14 million users, attackers harvested the same info as above, plus username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birthdate, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches.
- For 1 million, attackers only collected access tokens.
The social network said it’s working with the FBI to identify the attackers, and could not reveal additional information about the source of the attacks.
But while answering questions in a phone conference today, Guy Rosen, Facebook’s VP of Product Management, said Facebook did not identify attempts to use any of the stolen tokens.
Even if the attackers had tried to use the tokens, they wouldn’t have worked, Rosen said, the reason being that Facebook had invalidated all the stolen tokens on September 28.
Rosen also said Facebook did not find any evidence suggesting the tokens were used with the Facebook Login feature either, which would have allowed the attacker to log into third-party apps via Facebook tokens.
The Facebook exec also went into more details on how the attack unfolded. He said attackers initially used accounts under their direct control, which they had likely created, to exploit the vulnerability in the “View As” feature and steal tokens for the friends of those original accounts. They then used the same vulnerability over and over again until they gathered tokens for around 400,000 accounts, which Rosen referred to as “seed accounts.”
Once they had the tokens for the seed accounts, Rosen said the attackers used the tokens to access the 400,000 accounts and deployed scripts to harvest even more tokens at a larger and automated scale.
This activity triggered a massive traffic spike, which Facebook engineers detected on September 16. Following an investigation into the source of the traffic, the company concluded on September 26 that it was dealing with a coordinated attack, patched the View As vulnerability on September 27, and went public with the breach on September 28.
“In the coming days, we’ll send customized messages to the 30 million people affected to explain what information the attackers might have accessed, as well as steps they can take to help protect themselves, including from suspicious emails, text messages, or calls,” Rosen added separately, in a blog post.
Mockups of those messages are available below. Until the notifications arrive, Facebook has also launched a Help Center page where anyone can check whether they are one of the 30 million users who had a token stolen.
Achieve more with GigaOm
As we have grown substantially over the past two years, we are often asked who (even) GigaOm is, what the company does, how it differentiates, and so on. These are fair questions: many people still remember what we can call GigaOm 1.0, that fine media company born of the blogging wave.
We’ve been through the GigaOm 2.0 “boutique analyst firm” phase, before deciding we wanted to achieve more. That decision put us on a journey to where we are today, ten times the size in terms of headcount and still growing, and covering as many technology categories as the biggest analyst firms.
Fuelling our growth has been a series of interconnected decisions. First, we asked technology decision-makers (CIOs, CTOs, VPs of Engineering and Operations, and so on) what they needed and what was missing. Unanimously, they said they needed strategic technical information based on practical experience, that is, not just theory. Industry analysts, it has been said, can be like music critics who have never played in an orchestra. Sure, there’s a place for that, but it leaves a gap for practitioner-led insights.
Second, and building on this, we went through a test-and-learn phase to try various report models. Enrico Signoretti, now our VP of Product, spearheaded the creation of the Key Criteria and Radar document pair, based on his experience in evaluating solutions for enterprise clients. As we developed this product set in collaboration with end-user strategists, we doubled down on the Key Criteria report as a how-to guide for writing a Request for Proposal (RFP).
Doing this led to the third strand, expanding this thinking to the enterprise decision-making cycle. Technology decision-makers don’t wake up one morning and say, “I think I need some Object Storage.”
Rather, they will be faced with a challenge, a situation, or some other scenario: perhaps existing storage products are not scaling sufficiently, applications are being rationalized, or a solution has reached the end of its life. These scenarios dictate a need: often, the decision maker will not only need to define a response but will also then have to justify the spending.
This reality dictates the first product in the GigaOm portfolio, the GigaBrief, which is (essentially) a how-to guide for writing a business case. Once the decision maker has confirmed the budget, they can get on with writing an RFP (cf the Key Criteria and Radar), and then consider running a proof of concept (PoC).
We have a how-to guide for these as well, based on our Benchmarks, field tests, and Business Technology Impact (BTI) reports. We know that, alongside thought leadership, decision-makers need hard numbers for costs and benefits, so we double down on these.
For end-user organizations, our primary audience, we have therefore created a set of tools to make decisions and unblock deployments: our subscribers come to us for clarity and practitioner-led advice, which helps them work both faster and smarter and achieve their goals more effectively. Our research is high-impact by design, which is why we have an expanding set of partner organizations using it to enable their clients.
Specifically, learning companies such as Pluralsight and A Cloud Guru use GigaOm reports to help subscribers set direction and lock down the solutions they need to deliver. By its nature, our how-to approach to report writing has created a set of strategic training tools, which directly feed more specific technical training.
Meanwhile, channel companies such as Ingram Micro and Transformation Continuum use our research to help their clients lock down the solutions they need, together with a practitioner-led starting point for supporting frameworks, architectures, and structures. And we work together with media partners like The Register and The Channel Company to support their audiences with research and insights.
Technology vendors, too, benefit from end-user decision makers who are better equipped to make decisions. Rather than generic market making or long-listing potential vendors, our scenario-led materials directly impact buying decisions, taking procurement from a shortlist to a conclusion. Sales teams at systems, service, and software companies tell us how they use our reports when discussing options with prospects, not to evangelize but to explore practicalities and help reach a conclusion.
All these reasons and more enable us to say with confidence how end-user businesses, learning, channel and media companies, and indeed technology vendors are achieving more with GigaOm research. In a complex and constantly evolving landscape, our practitioner- and scenario-led approach brings specificity and clarity, helping organizations reach further, work faster and deliver more.
Our driving force is the value we bring; at the same time, we maintain a connection with our media heritage, which enables us to scale beyond traditional analyst models. We also continue to learn, reflect, and change — our open and transparent model welcomes feedback from all stakeholders so that we can drive improvements in our products, our approach, and our outreach.
This is to say, if you have any thoughts, questions, raves, or rants, don’t hesitate to get in touch with me directly. My virtual door, and my calendar, are always open.
Pragmatic view of Zero Trust
Traditionally we have taken the approach that we trust everything in the network, everything in the enterprise, and put our security at the edge of that boundary. Pass all of our checks and you are in the “trusted” group. That worked well when the opposition was not sophisticated, most end user workstations were desktops, the number of remote users was very small, and we had all our servers in a series of data centers that we controlled completely, or in part. We were comfortable with our place in the world, and the things we built. Of course, we were also asked to do more with less and this security posture was simple and less costly than the alternative.
Starting around the time of Stuxnet, this began to change. Security went from a poorly understood, accepted cost and back-room discussion to a topic discussed with interest in board rooms and at shareholder meetings. Overnight, executives went from being able to stay ignorant of cybersecurity to having to be knowledgeable about the company’s disposition on cyber. Attacks increased, and the major news organizations started reporting on cyber incidents. Legislation changed to reflect this new world, and more is coming. How do we handle this new world and all of its requirements?
Zero Trust is that change in security. Zero Trust is a fundamental change in cybersecurity strategy. Whereas before we focused on boundary control and built all our security around the idea of inside and outside, now we need to treat every component and every person as a potential Trojan Horse. It may look legitimate enough to get through the boundary, but in reality it could be hosting a threat actor waiting to attack. Worse, your applications and infrastructure could be a time bomb waiting to blow, where the code used in those tools is exploited in a “supply chain” attack, leaving the organization vulnerable through no fault of its own. Zero Trust says: “You are trusted only to take one action, one time, in one place, and the moment that changes you are no longer trusted and must be validated again, regardless of your location, application, userID, etc.” Zero Trust is exactly what it says: “I do not trust anything, so I validate all the things.”
That is a neat theory, but what does it mean in practice? We need to restrict users to the absolute minimum required access: to networks governed by a tight series of ACLs, to applications that communicate only with the things they must communicate with, and to devices segmented to the point that they think they are alone on private networks, while remaining dynamic enough to have their sphere of trust change as the organization evolves, and still allowing those devices to be managed. The overall goal is to reduce the “blast radius” any compromise would allow in the organization, since a cyber attack is a question of “when,” not “if.”
So if my philosophy changes from “I know that and trust it” to “I cannot believe that is what it says it is,” what can I do? Especially when I consider that I did not get 5x the budget to deal with 5x the complexity. I look to the market. Good news! Every single security vendor is now telling me how they solve Zero Trust with their tool, platform, service, or new shiny thing. So I ask questions. It seems to me they only really solve it according to marketing. Why? Because Zero Trust is hard. It is very hard. It is complex, and it requires change across the organization: not just tools, but the full trifecta of people, process, and technology, and not restricted to my technology team but the entire organization, not one region but globally. It is a lot.
All is not lost, though, because Zero Trust isn’t a fixed outcome; it is a philosophy. It is not a tool, or an audit, or a process. I cannot buy it, nor can I certify it (no matter what people selling things will say). So there is hope. Additionally, I always remember the truism “perfection is the enemy of progress,” and I realize I can move the needle.
So I take a pragmatic view of security, through the lens of Zero Trust. I don’t aim to do everything all at once. Instead, I look at what I am able to do and where I have existing skills. How is my organization designed? Am I a hub and spoke, with a core organization providing shared services to largely independent business units? Maybe I have a mesh, where the BUs are distributed because we organically integrated and staffed as we went through years of M&A. Maybe we are fully integrated as an organization, with one standard for everything. Maybe it is none of those.
I start by considering my capabilities and mapping my current state. Where is my organization on the NIST security framework model? Where do I think I could get with my current staff? Who do I have in my partner organization that can help me? Once I know where I am, I fork my focus.
One fork is the low-hanging fruit that can be resolved in the short term. Can I add some firewall rules to better restrict VLANs that do not need to communicate? Can I audit user accounts and make sure we are following best practices for organization and permission assignment? Does MFA exist, and can I expand its use or implement it for some critical systems?
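To make the account-audit question concrete, here is a minimal sketch of what such a check might look like. It assumes a hypothetical CSV export from an identity provider with username, is_admin, mfa_enabled, and last_login columns; the file format and field names are illustrative, not tied to any real directory product.

```python
# Minimal sketch: flag risky accounts from a hypothetical CSV export.
# Expected columns: username, is_admin, mfa_enabled, last_login (ISO 8601 date).
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # flag accounts unused for 90+ days

def audit_accounts(path, today=None):
    today = today or datetime.now()
    findings = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            is_admin = row["is_admin"].strip().lower() == "true"
            has_mfa = row["mfa_enabled"].strip().lower() == "true"
            last_login = datetime.fromisoformat(row["last_login"])
            if is_admin and not has_mfa:
                findings.append(f"{row['username']}: admin account without MFA")
            if today - last_login > STALE_AFTER:
                findings.append(f"{row['username']}: no login in {STALE_AFTER.days}+ days")
    return findings

if __name__ == "__main__":
    for finding in audit_accounts("accounts_export.csv"):
        print(finding)
```

Even a small report like this gives the short-term fork something measurable to act on while the longer-term plan takes shape.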
My second fork is to develop an ecosystem of talent, organized around a security-focused operating model, otherwise known as my long-term plan. DevOps becomes SecDevOps, where security is integrated and comes first. My partners become more integrated, and I look for, and build relationships with, new partners that fill my gaps. My teams are reorganized to support security by design AND practice. And I develop a training plan that pairs what we can do today (partner lunch and learns) with long-term strategy (which may mean upskilling my people with certifications).
This is the phase where we begin a tools rationalization project. Which of my existing tools do not perform as needed in the new Zero Trust world? These will likely need to be replaced in the near term. Which tools work well enough but will need to be replaced when their contracts end? And which tools will we retain?
Finally, where do we see the big, hard rocks being placed in our way? It is a given that our networks will need some redesign, and they will need to be designed with automation in mind, because the rules, ACLs, and VLANs will be far more complex than before, and changes will happen at a far faster pace. Automation is the only way this will work. The best part is that modern automation is self-documenting.
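As an illustration of what self-documenting automation can look like, here is a minimal sketch that renders firewall rules from a declarative segmentation policy, so the policy file itself records intent and the default is deny. The policy structure and rule syntax are hypothetical and not tied to any particular vendor or tool.

```python
# Minimal sketch: generate allow rules from a declarative VLAN segmentation
# policy, followed by a default deny. Names and syntax are illustrative only.

POLICY = {
    # (source segment, destination segment): allowed TCP ports
    ("vlan10-web", "vlan20-app"): [8443],
    ("vlan20-app", "vlan30-db"): [5432],
}

def render_rules(policy):
    """Render one allow rule per declared flow, then deny everything else."""
    rules = []
    for (src, dst), ports in sorted(policy.items()):
        for port in ports:
            rules.append(f"permit tcp {src} -> {dst} port {port}")
    rules.append("deny ip any -> any  # anything not declared above is blocked")
    return rules

if __name__ == "__main__":
    for rule in render_rules(POLICY):
        print(rule)
```

Because the change process becomes “edit the policy, regenerate, review the diff,” the policy is the documentation, which is what makes a faster pace of change sustainable.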
The wonderful thing about being pragmatic is that we get to make positive change, with a long-term goal in mind that we can all align on, focusing on what we can change now while developing for the future. All of this is wrapped in a communications layer for executive leadership and an evolving strategy for the board. We eat the elephant one bite at a time.
Retrospective thoughts on KubeCon Europe 2022
I’m not going to lie. As I sit on a plane flying away from Valencia, I confess to having been taken aback by the scale of KubeCon Europe this year. In my defence, I wasn’t alone: the volume of attendees appeared to take conference organisers and exhibitors by surprise, illustrated by the notable lack of water, (I was told) t-shirts and, at various points, taxis.
Keynotes were filled to capacity, and there was a genuine buzz from participants which seemed to fall into two camps: the young and cool, and the more mature and soberly dressed.
My time was largely spent in one-on-one meetings, analyst/press conferences, and walking the stands, so I can’t comment on the engineering sessions. Across the piece, however, there was a genuine sense of Kubernetes now being about the how, rather than the whether. For one reason or another, companies have decided they want to gain the benefits of building and deploying distributed, container-based applications.
Strangely enough, this wasn’t being seen as some magical sword that can slay the dragons of legacy systems and open the way to digital transformation; the kool-aid was as absent as the water. Ultimately, enterprises have accepted that, from an architectural standpoint and for applications in general, the Kubernetes model is as good as any available right now: a non-proprietary, well-supported open standard that they can get behind.
Virtualisation-based options and platform stacks are too heavyweight; serverless architectures are more applicable to specific use cases. So, if you want to build an application and you want it to be future-safe, the Kubernetes target is the one to aim for.
Whether to adopt Kubernetes might be a done deal, but how to adopt certainly is not. The challenge is not with Kubernetes itself, but everything that needs to go around it to make resulting applications enterprise-ready.
For example, the resulting applications need to operate in compliance environments; data needs to be managed, protected, and served into an environment that doesn’t care too much about state; integration tools are required for external and legacy systems; development pipelines need to be in place, robust, and value-focused; IT operations need a clear view of what’s running and where, as a bill of materials, plus the health of individual clusters; and disaster recovery is a must.
Kubernetes doesn’t do these things, opening the door to an ecosystem of solution vendors and (often CNCF-backed) open source projects. I could drill into these areas (service mesh, GitOps, orchestration, observability, and backup), but the broader point is that they are all evolving and coalescing around the need. As they increase in capability, barriers to adoption reduce and the number of potential use cases grows.
All of which puts the industry at an interesting juncture. It’s not that tooling isn’t ready: organizations are already successfully deploying applications based on Kubernetes. In many cases, however, they are doing more work than they need to: developers need insider knowledge of target environments, interfaces need to be integrated rather than consumed through third-party APIs, and higher-order management tooling (such as AIOps) has to be custom-deployed rather than recognising the norms of Kubernetes operations.
Solutions do exist, but they tend to come from relatively new vendors that are feature rather than platform players, meaning that end-user organisations have to choose their partners wisely, then build and maintain development and management platforms themselves rather than using pre-integrated tools from a single vendor.
None of this is a problem per se, but it does create overheads for adopters, even if they gain earlier benefits from adopting the Kubernetes model. The value of first-mover advantage has to be weighed against that of investing time and effort in the current state of tooling: as a travel company once told me, “we want to be the world’s best travel site, not the world’s best platform engineers.”
So, Kubernetes may be inevitable, but equally, it will become simpler, enabling organisations to apply the architecture to an increasingly broad set of scenarios. For organisations yet to make the step towards Kubernetes, now may still be a good time to run a proof of concept, though in some ways that ship has sailed; perhaps focus the PoC on what it means for working practices and structures, rather than determining whether the concepts work at all.
Meanwhile, and perhaps most importantly, now is a very good moment for organisations to identify the scenarios where Kubernetes works best “out of the box”, working with providers and reviewing architectural patterns to deliver proven results against specific, high-value needs; these are likely to vary by industry and by domain (I could dig into this, but did I mention that I’m sitting on a plane?).
Kubernetes might be a done deal, but that doesn’t mean it should be adopted wholesale before some of the peripheral detail is ironed out.