We’re familiar with Uber cozying up to scooter startups — it has bought one and invested in another — but over in India, the U.S. firm’s key rival is hatching a major alliance of its own as Ola invested $100 million in scooter rental startup Vogo.
Ola first invested back in August when Vogo raised an undisclosed Series A round from Ola, Matrix Partners and other investors, but now Ola is doubling down with this follow-on deal. It isn’t saying how much equity it has captured with this investment, nor the valuation that it gives Vogo, but you can well imagine it is high for a company that has only just done its Series A.
As you’d expect, this is a strategic investment and it’ll mean that Vogo scooters will appear within the Ola app, from where they can be booked by the company’s 150 million registered users, “soon.” Bangalore and Hyderabad are the two cities where Vogo operates, but you’d imagine that it will lean on Ola to expand into other parts of tier-one India where Ola already has a strong presence.
Ola’s money is going directly into supply, with Vogo planning to buy 100,000 more scooters for its platform. The company’s scooters, for those who don’t know them, are unlocked using a one-time password generated from the company’s Android app. Scooters are either dropped off at a designated station, or the rider specifies that they are taking a round trip and then returns it to the station where they started.
Ola CEO and co-founder Bhavish Aggarwal — pictured in the top image alongside Vogo CEO and founder Anand Ayyadurai — said he hopes that the deal and integration will improve last-mile transportation options across India.
A selection of screen captures from the Vogo Android app
“Our investment in Vogo will help build a smart multi-modal network for first-last mile connectivity in the country. Vogo’s automated scooter-sharing platform, backed by Ola’s expertise in this space can help transform our cities. Together, we are thrilled to be at the forefront of India’s rapidly growing micro-mobility market,” he said in a prepared statement.
Ola previously invested in its own bike rental service last year, although that category has struggled in India as Chinese imports like Ofo fled the country after failing to develop a sustainable business there, as in other markets outside China. Ola and Uber have offered motorbike taxis in India since 2016, but scooters offer a more individual approach.
Uber, for its part, doesn’t offer scooters in India at this point. But with India its second-largest market — it has reportedly crossed $1.6 billion in annualized bookings — you’d imagine that it is near the top of the company’s thoughts… although there is the business of that upcoming U.S. IPO to deal with.
AT&T is reportedly closing in on a deal to sell a stake in DirecTV to TPG, a private-equity firm.
Unfortunately for customers hoping that AT&T will relinquish control of DirecTV, a Reuters report on Friday said the pending deal would give TPG a “minority stake” in AT&T’s satellite-TV subsidiary. On the other hand, a private-equity firm looking to wring value out of a declining business wouldn’t necessarily be better for DirecTV customers than AT&T is.
It’s also possible that AT&T could cede operational control of DirecTV even if it remains the majority owner. CNBC in November reported on one proposed deal in which “AT&T would retain majority economic ownership of the [DirecTV and U-verse TV] businesses, and would maintain ownership of U-verse infrastructure, including plants and fiber,” while the buyer of a DirecTV stake “would control the pay-TV distribution operations and consolidate the business on its books.”
The latest talks between AT&T and TPG are “exclusive,” with other bidders out of the running for now, Reuters wrote last week, citing anonymous sources. “The advanced talks with TPG are the culmination of an auction that AT&T ran for DirecTV for several months,” the report said.
DirecTV lost most of its value under AT&T ownership
AT&T bought DirecTV in 2015 for $49 billion and has reportedly been unable to get bids valuing the TV provider at even half that amount. “The exact price TPG is willing to pay could not be learned, but sources said the deal could value DirecTV at more than $15 billion,” Reuters wrote, suggesting that the months-long auction didn’t raise the price much, if at all.
Bloomberg also reported that AT&T and TPG are in exclusive talks over DirecTV. “A potential deal is weeks away, and the talks could still fall apart… The agreement being discussed is highly structured and would include preferred stock,” Bloomberg wrote, citing an anonymous source.
TPG says it manages $85 billion in assets including investments in dozens of technology companies.
AT&T lost 8 million customers
AT&T has lost nearly 8 million customers since early 2017 from its Premium TV services, which include DirecTV satellite, U-verse wireline video, and the newer AT&T TV online service. Total customers in that category decreased from over 25 million in early 2017 to 17.1 million at the end of September 2020.
While the industrywide shift from cable and satellite TV to online streaming has hurt the business, AT&T itself accelerated DirecTV’s customer losses by repeatedly raising prices and removing promotional offers. AT&T just raised TV prices again last week. AT&T is scheduled to report earnings—including the latest TV-customer figures—on Wednesday.
Since the earliest days of warfare, commanders of forces in the field have sought greater awareness and control of what is now commonly referred to as the “battlespace”—a fancy word for all of the elements and conditions that shape and contribute to a conflict with an adversary, and all of the types of military power that can be brought to bear to achieve their objectives.
The clearer a picture military decision-makers have of the entire battlespace, the better informed their tactical and strategic decisions should be. Bringing computers into the mix in the 20th century meant a whole new set of challenges and opportunities, too. The ability of computers to sort through enormous piles of data to identify trends that aren’t obvious to people (something often referred to as “big data”) didn’t just open up new ways for commanders to get a view of the “big picture”—it let commanders see that picture closer and closer to real-time, too.
And time, as it turns out, is key. The problem that digital battlespace integration is intended to solve is reducing the time it takes commanders to close the “OODA loop,” a concept developed by US Air Force strategist Colonel John Boyd. OODA stands for “observe, orient, decide, act”—the decision loop made repeatedly in responding to unfolding events in a tactical environment (or just about anywhere else). OODA is largely an Air Force thing, but all the different branches of the military have similar concepts; the Army has long referred to the similar Lawson Command and Control Loop in its own literature.
By being able to maintain awareness of the unfolding situation, and respond to changes and challenges more quickly than an adversary can—by “getting inside” the opponent’s decision cycle—military commanders can, in theory, gain an advantage over an adversary and shape events in their favor.
Whether it’s in the cockpit or at the command level, speeding up the sensing of a threat and the response to it (did Han really shoot first, or did he just close the OODA loop faster?) is seen by military strategists as the key to dominance of every domain of warfare. However, closing that loop above the tactical level has historically been a challenge, because the communications between the front lines and top-level commanders have rarely been effective at giving everyone a true picture of what’s going on. And for much of the past century, the US military’s “battlespace management” was designed for dealing with a particular type of Cold War adversary—and not the kind they ended up fighting for much of the last 30 years, either.
Now that the long tail of the Global War on Terror is tapering down to a thin tip, the Department of Defense faces the need to re-examine the lessons learned over the past three decades (and especially the last two). The risks of learning the wrong things are huge. Trillions of dollars have been spent for not much effect over the last few decades. The Army’s enormous (and largely failed) Future Combat Systems program and certain other big-ticket technology plays that tried to bake a digitized battlefield into a bigger package have, if anything, demonstrated why pulling off big visions of a totally digitally integrated battlefield carries major risks.
At the same time, other elements of the command, control, communications, computers, intelligence, surveillance and reconnaissance (or just “C4ISR” if you’re into the whole brevity thing) toolkit have been able to build on basic building blocks and be (relatively) successful. The difference has often been in the doctrine that guides how technology is applied, and in how grounded the vision behind that doctrine is in reality.
In the beginning, there was tactical command and control. The basic technical components of the early “integrated battlespace”—the automation of situational awareness through technologies such as radar with integrated “Identification, Friend or Foe” (IFF)—emerged during World War II. But the modern concept of the integrated battlespace has its most obvious roots in the command and control (C2) systems of the early Cold War.
More specifically, they can be traced to one man: Ralph Benjamin, an electronic engineer for the Royal Naval Scientific Service. Benjamin, a Jewish refugee, went to work in 1944 for the Royal Naval Scientific Service in what was called the Admiralty Signals Establishment.
“They were going to call it the Admiralty Radar & Signals Establishment,” Benjamin recounted in an oral history for the IEEE, “and got as far as printing the first letterheads with ARSE, before deciding it might be more tactful to make it the Admiralty Signals & Radar Establishment (ASRE).” During the war, he worked on a team developing radar for submarines, and also on the Mark V IFF system.
As the war came to an end, he had begun working on how to improve the flow of C2 information across naval battle groups. It was in that endeavor that Benjamin developed and later patented the display cursor and trackball, the forerunner of the computer mouse, as part of his work on the first electronic C2 system, called the Comprehensive Display System. CDS allowed data shared from all of a battle group’s sensors to be overlaid on a single display.
The basic design and architecture of Benjamin’s CDS was the foundation for nearly all US and NATO digital C2 systems developed over the next 30 years. It led to the US Air Force’s Semi-Automatic Ground Environment (SAGE)—the system used to direct and control North American Air Defense (NORAD)—as well as the Navy Tactical Data System (NTDS), which reached the US fleet in the early 1960s. The same technology would be applied to handling antisubmarine warfare (much to the dismay of some Russian submarine commanders) with the ASWC&CS, deployed to Navy ships in the late 1960s and 1970s.
The core of Benjamin’s C2 system was a digital data link protocol today known as Link-11 (or MIL-STD-6011). Link-11 is a radio network protocol based on high frequency (HF) or ultra-high frequency (UHF) radio that can transfer data at rates of 1,364 or 2,250 bits per second. Link-11 remains a standard across NATO today, because of its ability to network units not in line of sight, and is used in some form across all the branches of the US military—along with a point-to-point version (Link-11B) and a handful of other tactical digital information link (TADIL) protocols. But all the way up through the 1990s, various attempts to create better, faster, and more applicable versions of Link-11 failed.
Alphabet soup: from C2 to C3I to C4ISR
Beyond air and naval operations control, C2 was mostly about human-to-human communications. The first efforts to computerize C2 on a broader level came from the top down, following the Cuban Missile Crisis.
In an effort to speed National Command Authority communications to units in the field in time of crisis, the Defense Department commissioned the Worldwide Military Command and Control System (WWMCCS, or “wimeks”). WWMCCS was intended to give the President, the Secretary of Defense, and Joint Chiefs of Staff a way to rapidly receive threat warnings and intelligence information, and to then quickly assign and direct actions through the operational command structure.
Initially, WWMCCS was assembled from a collection of federated systems built at different command levels—nearly 160 different computer systems, based on 30 different software systems, spread across 81 sites. And that loose assemblage of systems resulted in early failures. During the Six-Day War between Egypt and Israel in 1967, orders were sent by the Joint Chiefs of Staff to move the USS Liberty away from the Israeli coastline, and despite five high-priority messages to the ship sent through WWMCCS, none were received for over 13 hours. By then, the ship had already been attacked by the Israelis.
There would be other failures that would demonstrate the problems with the disjointed structure of C2 systems, even as improvements were made to WWMCCS and other similar tools throughout the 1970s. The evacuation of Saigon at the end of the Vietnam War, the Mayaguez Incident, and the debacle at Desert One during the attempted hostage rescue in Iran were the most visceral of these, as commanders failed to grasp conditions on the ground while disaster unfolded.
These cases, in addition to the failed readiness exercises Nifty Nugget and Proud Spirit in 1978 and 1979, were cited by John Boyd in a 1987 presentation entitled “Organic Design for Command and Control,” as was the DOD’s response to them:
…[M]ore and better sensors, more communications, more and better computers, more and better display devices, more satellites, more and better fusion centers, etc—all tied to one giant fully informed, fully capable C&C system. This way of thinking emphasizes hardware as the solution.
Boyd’s view was that this centralized, top-down approach would never be effective, because it failed to create the conditions key to success—conditions he saw as arising from things purely human, based on true understanding, collaboration, and leadership. “[C2] represents a top-down mentality applied in a rigid or mechanical (or electrical) way that ignores as well as stifles the implicit nature of human beings to deal with uncertainty, change, and stress,” Boyd noted.
Those were the elements missing from late Cold War efforts, and what had been called “C2” gained some more Cs and evolved into “C4I”—command, control, communications, computers, and intelligence—systems. Eventually, surveillance and reconnaissance would be tagged onto the initialism, turning it into “C4ISR.”
While there were notable improvements in some areas, such as sensors—as demonstrated by the Navy’s Aegis system and the Patriot missile system—there was still an unevenness of information sharing. And the Army’s C4I lacked any real digital command, control, and communications systems well into the 1990s. Most of the tasks involved were handled manually, over voice communications or even by courier.
The Gulf War may not have been a true test of battlefield command and control, but it did hint at some of the elements that would both enhance and complicate the battlefield picture of the future. For example, it featured the first use of drones to perform battlefield targeting and intelligence collection—as well as the first surrender of enemy troops to a drone, when Iraqi troops on Faylaka Island signaled their surrender to the USS Wisconsin’s Pioneer RPV. The idea of remotely controlled platforms providing actionable information, networked into the battlefield information space, was one I had seen early hints of in the late 1980s.
DDoS-for-hire services are abusing the Microsoft Remote Desktop Protocol to increase the firepower of distributed denial-of-service attacks that paralyze websites and other online services, a security firm said this week.
Typically abbreviated as RDP, Remote Desktop Protocol is the underpinning for a Microsoft Windows feature that allows one device to log into another device over the Internet. RDP is mostly used by businesses to save employees the cost or hassle of having to be physically present when accessing a computer.
As is typical with many authenticated systems, RDP responds to login requests with a much longer sequence of bits that establishes a connection between the two parties. So-called booter/stresser services, which for a fee will bombard Internet addresses with enough data to take them offline, have recently embraced RDP as a means to amplify their attacks, security firm Netscout said.
The amplification allows attackers with only modest resources to multiply the volume of data they direct at targets. The technique works by bouncing a relatively small amount of data off the amplifying service, which in turn reflects a much larger amount of data at the final target. With an amplification factor of 85.9 to 1, 10Gbps of requests directed at an RDP server will deliver roughly 860Gbps to the target.
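The arithmetic behind that figure is simple multiplication. A minimal sketch, using the 85.9:1 factor Netscout reported for RDP (the memcached factor mentioned later is included only for comparison):

```python
def reflected_bandwidth_gbps(request_gbps: float, amplification_factor: float) -> float:
    """Bandwidth delivered to the victim for a given rate of spoofed requests."""
    return request_gbps * amplification_factor

RDP_FACTOR = 85.9          # Netscout's reported figure for RDP reflection
MEMCACHED_FACTOR = 51000   # memcached, for comparison

# 10Gbps of requests aimed at RDP reflectors lands ~859Gbps on the target:
print(reflected_bandwidth_gbps(10, RDP_FACTOR))   # -> 859.0
```

The same function shows why memcached was so dangerous: even a 1Gbps request stream yields a 51Tbps-class theoretical ceiling.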
“Observed attack sizes range from ~20 Gbps – ~750 Gbps,” Netscout researchers wrote. “As is routinely the case with newer DDoS attack vectors, it appears that after an initial period of employment by advanced attackers with access to bespoke DDoS attack infrastructure, RDP reflection/amplification has been weaponized and added to the arsenals of so-called booter/stresser DDoS-for-hire services, placing it within the reach of the general attacker population.”
DDoS amplification attacks date back decades. As legitimate Internet users collectively block one vector, attackers find new ones to take their place. DDoS amplifiers have included open DNS resolvers, the WS-Discovery protocol used by IoT devices, and the Internet’s Network Time Protocol. One of the most powerful amplification vectors in recent memory is the so-called memcached protocol, which has an amplification factor of 51,000 to 1.
DDoS amplification attacks work by using UDP network packets, which are easily spoofable on many networks. An attacker sends the vector a request and spoofs the headers to give the appearance the request came from the target. The amplification vector then sends the response to the target whose address appears in the spoofed packets.
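Spoofing works because the source address in an IPv4 header is simply a field the sender fills in; nothing in UDP verifies it. The sketch below builds such a header to illustrate the point. The addresses are documentation-range placeholders, nothing is sent, and the checksum is left at zero:

```python
import socket
import struct

def build_spoofed_ipv4_header(src_ip: str, dst_ip: str, payload_len: int) -> bytes:
    """Pack a minimal 20-byte IPv4 header claiming an arbitrary source address.
    Illustration only -- a real attacker would need raw-socket access to send it."""
    version_ihl = (4 << 4) | 5            # IPv4, 5 x 32-bit words = 20-byte header
    total_length = 20 + 8 + payload_len   # IP header + UDP header + payload
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,
        0, 0,                             # identification, flags/fragment offset
        64, 17, 0,                        # TTL, protocol 17 = UDP, checksum (zeroed)
        socket.inet_aton(src_ip),         # forged source: the victim's address
        socket.inet_aton(dst_ip),         # destination: the reflecting server
    )

hdr = build_spoofed_ipv4_header("203.0.113.7", "198.51.100.9", 4)
# The reflector reads the source field and sends its much larger reply there:
print(socket.inet_ntoa(hdr[12:16]))  # -> 203.0.113.7
```

Because the reflector answers whatever address sits in bytes 12–15, the amplified response goes to the victim rather than the attacker.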
There are about 33,000 RDP servers on the Internet that can be abused in amplification attacks, Netscout said. Although RDP sessions typically run over TCP, the protocol can also operate over UDP, and it is the UDP service that attackers can abuse as a reflector.
Netscout recommended that RDP servers be accessible only over virtual private network services. In the event RDP servers offering remote access over UDP can’t be immediately moved behind VPN concentrators, administrators should disable RDP over UDP as an interim measure.
Besides harming the Internet as a whole, unsecured RDP can be a hazard to the organizations that expose them to the Internet.
“The collateral impact of RDP reflection/amplification attacks is potentially quite high for organizations whose Windows RDP servers are abused as reflectors/amplifiers,” Netscout explained. “This may include partial or full interruption of mission-critical remote-access services, as well as additional service disruption due to transit capacity consumption, state-table exhaustion of stateful firewalls, load balancers, etc.”