This was a bad year for the smartphone. For the first time, its seemingly unstoppable growth began to slow.
Things started off on a bad note in February, when Gartner recorded the category's first year-over-year decline since it began tracking smartphone sales. Not even the mighty Apple was immune from the trend: last week, its stock took a hit as influential analyst Ming-Chi Kuo downgraded sales expectations for 2019.
People simply aren’t upgrading as fast as they used to. This is due in part to the fact that flagship phones are pretty good across the board. Manufacturers have painted themselves into a corner as they’ve battled it out over specs. There just aren’t as many compelling reasons to continually upgrade.
Of course, that’s not going to stop them from trying. Along with the standard upgrades to things like cameras, you can expect some radical rethinks of smartphone form factors, along with the first few pushes into 5G in the next calendar year.
If we’re lucky, there will be a few surprises along the way as well, but the following trends all look like no-brainers for 2019.
5G
Let’s get this one out of the way, shall we? It’s a bit tricky — after all, plenty of publications are going to claim 2019 as “The Year of 5G,” but they’re all jumping the gun. It’s true that we’re going to see the first wave of 5G handsets appearing next year.
OnePlus and LG have committed to a handset and Samsung, being Samsung, has since committed to two. We’ve also seen promises of a Verizon 5G MiFi and whatever the hell this thing is from HTC and Sprint.
Others, most notably Apple, are absent from the list. The company isn't expected to release a 5G handset until 2020. While that will put it behind the curve, the truth of the matter is that 5G will arrive in this world as a marketing gimmick. When it does fully roll out, 5G has the potential to be a great, game-changing technology for smartphones and beyond. But while carriers have promised to begin rolling out the technology in the States early next year (AT&T even got a jump start), your handset will likely spend a lot more time using 4G.
That is to say, until 5G becomes more ubiquitous, you’re going to be paying a hefty premium for a feature you barely use. Of course, that’s not going to stop hardware makers, component manufacturers and their carrier partners from rushing these devices to market as quickly as possible. Just be aware of your chosen carrier’s coverage map before shelling out that extra cash.
Foldables
We’ve already seen two — well, one-and-a-half, really. And you can be sure we’ll see even more as smartphone manufacturers scramble to figure out the next big thing. After years of waiting, though, we’ve been pretty unimpressed with the foldable smartphones we’ve seen so far.
The Royole FlexPai is fascinating, but its execution leaves something to be desired. Samsung’s prototype, meanwhile, is just that. The company made it the centerpiece of its recent developer conference but didn’t really step out of the shadows with the product — almost certainly because it isn't ready to show off the full device.
Now that the long-promised technology is ready in consumer form, it’s a safe bet we’ll be seeing a number of companies exploring the form factor. That will no doubt be helped along by the fact that Google partnered with Samsung to create a version of Android tailored to the form factor — similar to its embrace of the notch with Android Pie.
Of course, like 5G, these designs are going to come at a major premium. Once the initial novelty has worn off, the hardest task of all will be convincing consumers they need one in their life.
Pinholes and pop-ups
Bezels be damned. For better or worse, the notch has been a mainstay of flagship smartphones. Practically everyone (save for Samsung) has embraced the cutout in an attempt to go edge to edge. Even Google made it a part of Android (while giving the world a notch you can see from space with the Pixel 3 XL).
We’ve already seen (and will continue to see) a number of clever workarounds like Oppo’s pop-up. The pinhole/hole-punch design found on the Huawei Nova 4 seems like a more reasonable route for the majority of smartphone manufacturers.
Embedded Fingerprint Readers
The flip side of the race to infinite displays is the question of what to do with the fingerprint reader. Some manufacturers moved it to the rear, while others, like Apple, did away with it in favor of face scanning. Of course, on handsets that can't capture a full 3D face scan, that tech is pretty easy to spoof. For that reason, fingerprint scanners aren’t going away any time soon.
OnePlus’ 6T was among the first to bring the in-display fingerprint scanner to market, and it works like a charm. Here’s how the tech works (quoting from my own writeup from a few months ago):
When the screen is locked, a fingerprint icon pops up, showing you where to press. When the finger is in the right spot, the AMOLED display flashes a bright light to capture a scan of the surface from the reflected light. The company says it takes around a third of a second, though in my own testing, that number was closer to one second or sometimes longer as I negotiated my thumb into the right spot.
Samsung’s S10 is expected to bring that technology when it arrives in February, and I wouldn’t be surprised to see a lot of other manufacturers follow suit.
Cameras, cameras, cameras (also, cameras)
What’s the reasonable limit for rear-facing cameras? Two? Three? What about the five cameras on that leaked Nokia from a few months back? When does it stop being a phone back and start being a camera front? These are the sorts of existential crises we’ll have to grapple with as manufacturers continue to attempt differentiation through imaging.
Smartphone cameras are pretty good across the board these days, so one simple solution has been adding more of them. LG’s latest offers a reasonable example of how this will play out for many: the V40 ThinQ has two front and three rear-facing cameras. The three on the back are standard, super-wide-angle and 2x optical zoom, offering ways to capture different types of images in a thin form factor that can't otherwise accommodate much optical zoom.
On the flip side, companies will also be investing a fair deal in software to bring better shots out of existing components. Apple and Google both demonstrated how a little AI and ML can go a long way toward improving image capture on their latest handsets. Expect much of that work to focus on ultra-low light and zoom.
AT&T is reportedly closing in on a deal to sell a stake in DirecTV to TPG, a private-equity firm.
Unfortunately for customers hoping that AT&T will relinquish control of DirecTV, a Reuters report on Friday said the pending deal would give TPG a “minority stake” in AT&T’s satellite-TV subsidiary. On the other hand, a private-equity firm looking to wring value out of a declining business wouldn’t necessarily be better for DirecTV customers than AT&T is.
It’s also possible that AT&T could cede operational control of DirecTV even if it remains the majority owner. CNBC in November reported on one proposed deal in which “AT&T would retain majority economic ownership of the [DirecTV and U-verse TV] businesses, and would maintain ownership of U-verse infrastructure, including plants and fiber,” while the buyer of a DirecTV stake “would control the pay-TV distribution operations and consolidate the business on its books.”
The latest talks between AT&T and TPG are “exclusive,” with other bidders out of the running for now, Reuters wrote last week, citing anonymous sources. “The advanced talks with TPG are the culmination of an auction that AT&T ran for DirecTV for several months,” the report said.
DirecTV lost most of its value under AT&T ownership
AT&T bought DirecTV in 2015 for $49 billion and has reportedly been unable to get bids valuing the TV provider at even half that amount. “The exact price TPG is willing to pay could not be learned, but sources said the deal could value DirecTV at more than $15 billion,” Reuters wrote, suggesting that the months-long auction didn’t raise the price much, if at all.
Bloomberg also reported that AT&T and TPG are in exclusive talks over DirecTV. “A potential deal is weeks away, and the talks could still fall apart… The agreement being discussed is highly structured and would include preferred stock,” Bloomberg wrote, citing an anonymous source.
TPG says it manages $85 billion in assets including investments in dozens of technology companies.
AT&T lost 8 million customers
AT&T has lost nearly 8 million customers since early 2017 from its Premium TV services, which include DirecTV satellite, U-verse wireline video, and the newer AT&T TV online service. Total customers in that category fell from over 25 million in early 2017 to 17.1 million at the end of September 2020.
While the industrywide shift from cable and satellite TV to online streaming has hurt the business, AT&T itself accelerated DirecTV’s customer losses by repeatedly raising prices and removing promotional offers. AT&T just raised TV prices again last week. AT&T is scheduled to report earnings—including the latest TV-customer figures—on Wednesday.
Since the earliest days of warfare, commanders of forces in the field have sought greater awareness and control of what is now commonly referred to as the “battlespace”—a fancy word for all of the elements and conditions that shape and contribute to a conflict with an adversary, and all of the types of military power that can be brought to bear to achieve their objectives.
The clearer a picture military decision-makers have of the entire battlespace, the better informed their tactical and strategic decisions should be. Bringing computers into the mix in the 20th century meant a whole new set of challenges and opportunities, too. The ability of computers to sort through enormous piles of data to identify trends that aren’t obvious to people (something often referred to as “big data”) didn’t just open up new ways for commanders to get a view of the “big picture”—it let them see that picture closer and closer to real time, too.
And time, as it turns out, is key. The problem that digital battlespace integration is intended to solve is reducing the time it takes commanders to close the “OODA loop,” a concept developed by US Air Force strategist Colonel John Boyd. OODA stands for “observe, orient, decide, act”—the decision loop made repeatedly in responding to unfolding events in a tactical environment (or just about anywhere else). OODA is largely an Air Force thing, but all the different branches of the military have similar concepts; the Army has long referred to the similar Lawson Command and Control Loop in its own literature.
By maintaining awareness of the unfolding situation and responding to changes and challenges more quickly than an adversary can—by “getting inside” the opponent’s decision cycle—military commanders can in theory gain an advantage over them and shape events in their favor.
Whether it’s in the cockpit or at the command level, speeding up the sensing of a threat and the response to it (did Han really shoot first, or did he just close the OODA loop faster?) is seen by military strategists as the key to dominance in every domain of warfare. However, closing that loop above the tactical level has historically been a challenge, because communications between the front lines and top-level commanders have rarely been effective at giving everyone a true picture of what’s going on. And for much of the past century, the US military’s “battlespace management” was designed for a particular type of Cold War adversary—not the kind it ended up fighting for much of the last 30 years.
Now that the long tail of the Global War on Terror is tapering down to a thin tip, the Department of Defense faces the need to re-examine the lessons learned over the past three decades (and especially the last two). The risks of learning the wrong things are huge. Trillions of dollars have been spent for not much effect over the last few decades. The Army’s enormous (and largely failed) Future Combat Systems program and certain other big-ticket technology plays that tried to bake a digitized battlefield into a bigger package have, if anything, demonstrated that pulling off a grand vision of a totally digitally integrated battlefield carries major risks.
At the same time, other elements of the command, control, communications, computers, intelligence, surveillance and reconnaissance (or just “C4ISR” if you’re into the whole brevity thing) toolkit have been able to build on basic building blocks and be (relatively) successful. The difference has often been in the doctrine that guides how technology is applied, and in how grounded the vision behind that doctrine is in reality.
In the beginning, there was tactical command and control. The basic technical components of the early “integrated battlespace”—the automation of situational awareness through technologies such as radar with integrated “Identification, Friend or Foe” (IFF)—emerged during World War II. But the modern concept of the integrated battlespace has its most obvious roots in the command and control (C2) systems of the early Cold War.
More specifically, they can be traced to one man: Ralph Benjamin, an electronic engineer with the Royal Naval Scientific Service. Benjamin, a Jewish refugee, joined the service in 1944 at what was then called the Admiralty Signals Establishment.
“They were going to call it the Admiralty Radar & Signals Establishment,” Benjamin recounted in an oral history for the IEEE, “and got as far as printing the first letterheads with ARSE, before deciding it might be more tactful to make it the Admiralty Signals & Radar Establishment (ASRE).” During the war, he worked on a team developing radar for submarines, and also on the Mark V IFF system.
As the war came to an end, he began working on how to improve the flow of C2 information across naval battle groups. It was in that endeavor that Benjamin developed and later patented the display cursor and trackball, forerunners of the computer mouse, as part of his work on the first electronic C2 system, called the Comprehensive Display System. CDS allowed data shared from all of a battle group’s sensors to be overlaid on a single display.
The basic design and architecture of Benjamin’s CDS was the foundation for nearly all of US and NATO digital C2 systems developed over the next 30 years. It led to the US Air Force’s Semi-Automatic Ground Environment (SAGE)—the system used to direct and control North American Air Defense (NORAD)—as well as the Navy Tactical Data System (NTDS), which reached the US fleet in the early 1960s. The same technology would be applied to handling antisubmarine warfare (much to the dismay of some Russian submarine commanders) with the ASWC&CS, deployed to Navy ships in the late 1960s and 1970s.
The core of Benjamin’s C2 system was a digital data link protocol today known as Link-11 (or MIL-STD-6011). Link-11 is a radio network protocol based on high frequency (HF) or ultra-high frequency (UHF) radio that can transfer data at rates of 1,364 or 2,250 bits per second. Link-11 remains a standard across NATO today, because of its ability to network units not in line of sight, and is used in some form across all the branches of the US military—along with a point-to-point version (Link-11B) and a handful of other tactical digital information link (TADIL) protocols. But all the way up through the 1990s, various attempts to create better, faster, and more applicable versions of Link-11 failed.
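For a sense of how constrained those data rates are by modern standards, here's a back-of-the-envelope sketch in Python. The rates come from the text; the 10 KB payload size is a hypothetical figure for illustration, not from the source:

```python
# Rough transfer times at Link-11's two standard data rates.
# Ignores framing, error correction, and protocol overhead.

def transfer_seconds(payload_bytes: int, bits_per_second: int) -> float:
    """Seconds needed to move payload_bytes at a raw data rate."""
    return payload_bytes * 8 / bits_per_second

# A hypothetical 10 KB tactical-picture update takes the better part
# of a minute at the slower rate:
for rate in (1_364, 2_250):
    print(f"{rate} bps: {transfer_seconds(10_000, rate):.1f} s")
```

At these speeds, even a modest data exchange occupies the channel for tens of seconds, which helps explain the decades of attempts to build faster successors.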
Alphabet soup: from C2 to C3I to C4ISR
Beyond air and naval operations control, C2 was mostly about human-to-human communications. The first efforts to computerize C2 on a broader level came from the top down, following the Cuban Missile Crisis.
In an effort to speed National Command Authority communications to units in the field in time of crisis, the Defense Department commissioned the Worldwide Military Command and Control System (WWMCCS, or “wimeks”). WWMCCS was intended to give the President, the Secretary of Defense, and Joint Chiefs of Staff a way to rapidly receive threat warnings and intelligence information, and to then quickly assign and direct actions through the operational command structure.
Initially, WWMCCS was assembled from a collection of federated systems built at different command levels—nearly 160 different computer systems, based on 30 different software systems, spread across 81 sites. That loose assemblage resulted in early failures. During the Six-Day War in 1967, the Joint Chiefs of Staff sent orders to move the USS Liberty away from the Israeli coastline, but despite five high-priority messages sent through WWMCCS, none reached the ship for over 13 hours. By then, the Liberty had already been attacked by Israeli forces.
There would be other failures that would demonstrate the problems with the disjointed structure of C2 systems, even as improvements were made to WWMCCS and other similar tools throughout the 1970s. The evacuation of Saigon at the end of the Vietnam War, the Mayaguez Incident, and the debacle at Desert One during the attempted hostage rescue in Iran were the most visceral of these, as commanders failed to grasp conditions on the ground while disaster unfolded.
These cases, in addition to the failed readiness exercises Nifty Nugget and Proud Spirit in 1978 and 1979, were cited by John Boyd in a 1987 presentation entitled “Organic Design for Command and Control,” as was the DOD’s response to them:
…[M]ore and better sensors, more communications, more and better computers, more and better display devices, more satellites, more and better fusion centers, etc—all tied to one giant fully informed, fully capable C&C system. This way of thinking emphasizes hardware as the solution.
Boyd’s view was that this centralized, top-down approach would never be effective, because it failed to create the conditions key to success—conditions he saw as arising from things purely human, based on true understanding, collaboration, and leadership. “[C2] represents a top-down mentality applied in a rigid or mechanical (or electrical) way that ignores as well as stifles the implicit nature of human beings to deal with uncertainty, change, and stress,” Boyd noted.
Those were the elements missing from late Cold War efforts, and what had been called “C2” gained some more Cs and evolved into “C4I”—command, control, communications, computers, and intelligence—systems. Eventually, surveillance and reconnaissance would be tagged onto the initialism, turning it into “C4ISR.”
While there were notable improvements in some areas, such as sensors—as demonstrated by the Navy’s Aegis system and the Patriot missile system—there was still an unevenness of information sharing. And the Army’s C4I lacked any real digital command, control, and communications systems well into the 1990s. Most of those tasks remained manual, verified by voice communications or even couriers.
The Gulf War may not have been a true test of battlefield command and control, but it did hint at some of the elements that would both enhance and complicate the battlefield picture of the future. For example, it featured the first use of drones to perform battlefield targeting and intelligence collection—as well as the first surrender of enemy troops to a drone, when Iraqi troops on Faylaka Island signaled their surrender to the USS Wisconsin’s Pioneer RPV. The idea of remotely controlled platforms that could feed actionable information into the networked battlefield picture was something I had seen early hints of in the late 1980s.
DDoS-for-hire services are abusing the Microsoft Remote Desktop Protocol to increase the firepower of distributed denial-of-service attacks that paralyze websites and other online services, a security firm said this week.
Typically abbreviated as RDP, Remote Desktop Protocol is the underpinning for a Microsoft Windows feature that allows one device to log into another device over the Internet. RDP is mostly used by businesses to save employees the cost or hassle of having to be physically present when accessing a computer.
As is typical of many authenticated systems, RDP responds to login requests with a much longer sequence of bits that establishes a connection between the two parties. So-called booter/stresser services, which for a fee will bombard Internet addresses with enough data to take them offline, have recently embraced RDP as a means of amplifying their attacks, security firm Netscout said.
The amplification allows attackers with only modest resources to strengthen the size of the data they direct at targets. The technique works by bouncing a relatively small amount of data off the amplifying service, which in turn reflects a much larger amount of data at the final target. With an amplification factor of 85.9 to 1, 10 gigabits per second of requests directed at an RDP server will deliver roughly 860Gbps to the target.
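That arithmetic is simple enough to sketch directly. In the snippet below, the 85.9 factor is Netscout's published figure; the 10Gbps request rate is the illustrative number from the text:

```python
# Bandwidth that arrives at the victim for a given spoofed-request rate,
# using Netscout's reported RDP amplification factor.

RDP_FACTOR = 85.9  # response bytes per request byte, per Netscout

def reflected_gbps(request_gbps: float, factor: float = RDP_FACTOR) -> float:
    """Traffic (in Gbps) reflected at the target."""
    return request_gbps * factor

print(round(reflected_gbps(10)))  # 859 -- roughly the 860Gbps cited above
```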
“Observed attack sizes range from ~20 Gbps – ~750 Gbps,” Netscout researchers wrote. “As is routinely the case with newer DDoS attack vectors, it appears that after an initial period of employment by advanced attackers with access to bespoke DDoS attack infrastructure, RDP reflection/amplification has been weaponized and added to the arsenals of so-called booter/stresser DDoS-for-hire services, placing it within the reach of the general attacker population.”
DDoS amplification attacks date back decades. As legitimate Internet users collectively block one vector, attackers find new ones to take their place. DDoS amplifiers have included open DNS resolvers, the WS-Discovery protocol used by IoT devices, and the Internet’s Network Time Protocol. One of the most powerful amplification vectors in recent memory is the so-called memcached protocol, which has an amplification factor of 51,000 to 1.
DDoS amplification attacks work by using UDP network packets, which are easily spoofable on many networks. An attacker sends the vector a request and spoofs the headers to give the appearance the request came from the target. The amplification vector then sends the response to the target whose address appears in the spoofed packets.
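Because UDP has no handshake, a reflector simply answers whatever source address the packet header claims. The toy model below illustrates that logic with plain Python objects; the addresses and the reflector function are hypothetical, and no real networking is involved:

```python
# Toy model of UDP reflection: the service replies to the claimed source
# address, which it has no way to verify.

from dataclasses import dataclass

@dataclass
class Datagram:
    src: str   # claimed source address -- UDP never authenticates this
    dst: str
    size: int  # bytes

def reflect(request: Datagram, factor: float) -> Datagram:
    # The amplifying service sends its (much larger) response to request.src.
    return Datagram(src=request.dst, dst=request.src,
                    size=int(request.size * factor))

# The attacker writes the victim's address into the src field:
spoofed = Datagram(src="victim.example", dst="rdp-server.example", size=100)
response = reflect(spoofed, 85.9)
print(response.dst, response.size)  # victim.example 8590
```

The key point the model captures is that the victim never sends anything; it merely receives the amplified responses addressed to it.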
There are about 33,000 RDP servers on the Internet that can be abused in amplification attacks, Netscout said. RDP can run over TCP as well as UDP, but only the UDP variant can be abused this way, since TCP's handshake prevents source-address spoofing.
Netscout recommended that RDP servers be accessible only over virtual private network services. In the event RDP servers offering remote access over UDP can’t be immediately moved behind VPN concentrators, administrators should disable RDP over UDP as an interim measure.
Besides harming the Internet as a whole, unsecured RDP can be a hazard to the organizations that expose it to the Internet.
“The collateral impact of RDP reflection/amplification attacks is potentially quite high for organizations whose Windows RDP servers are abused as reflectors/amplifiers,” Netscout explained. “This may include partial or full interruption of mission-critical remote-access services, as well as additional service disruption due to transit capacity consumption, state-table exhaustion of stateful firewalls, load balancers, etc.”