Google gives Android developers new tools to make money from users who won’t pay

Google today is introducing a new way for Android developers to generate revenue from their mobile applications. And no, it’s not subscription-related. Instead, the company is launching a new monetization option for apps called “Rewarded Products.” This will allow non-paying app users to contribute to an app’s revenue stream by sacrificing their time, but not their money. The first product will be rewarded video, where users can opt to watch a video ad in exchange for in-game currency, virtual goods or other benefits.

The feature may make developers happy, but it remains to be seen how users react. Reception will depend on how the videos are introduced in the app.

Even in Google’s example of the rewarded product in action — meant to showcase a best-design practice, one would think — the video interrupts gameplay between levels with a full-screen takeover. That’s not a scenario users tend to respond well to, unless, perhaps, it’s presented as the only way to play a popular, previously paid-only game for free.

Rewarded video has worked for some apps where users have come to expect a free product. That could include free-to-play games or other services where subscribing is an option, not a requirement.

For example, Pandora’s music streaming service was free and ad-supported for years back when it was radio-only. After it introduced tiers offering on-demand streaming to compete with Spotify, it rolled out a rewarded video product — so to speak — of its own. Today, Pandora listeners can choose to watch a video ad to access on-demand music for a session, as an alternative to paying a monthly subscription.

Android app developers, of course, already use advertisements to supplement their revenue, or as their primary means of monetization, but this launch creates an official Google Play “product.” That makes implementation easier for developers and gives Google a way to compete with third parties offering something similar.

Rewarded products can be added to any app using the Google Play Billing Library or AIDL interface with only a few additional API calls, the company says. It won’t require an SDK.
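For a rough sense of what those “few additional API calls” could look like, here is a minimal Kotlin sketch assuming the rewarded-SKU support that shipped in the Play Billing Library around this beta. The SKU id is hypothetical, and the rewarded-specific classes and calls (`RewardLoadParams`, `loadRewardedSku`) are assumptions based on that beta-era API, so exact names and signatures may differ by library version.

```kotlin
import android.app.Activity
import com.android.billingclient.api.BillingClient
import com.android.billingclient.api.BillingClientStateListener
import com.android.billingclient.api.BillingFlowParams
import com.android.billingclient.api.BillingResult
import com.android.billingclient.api.Purchase
import com.android.billingclient.api.PurchasesUpdatedListener
import com.android.billingclient.api.RewardLoadParams
import com.android.billingclient.api.SkuDetailsParams

// Minimal sketch of offering a rewarded product via the Play Billing Library.
// "rewarded_video_sku" is a hypothetical SKU id, and the rewarded-specific
// calls (RewardLoadParams, loadRewardedSku) reflect the beta-era API, so they
// may differ by library version.
class RewardedProductHelper(private val activity: Activity) : PurchasesUpdatedListener {

    private val billingClient = BillingClient.newBuilder(activity)
        .setListener(this)
        .enablePendingPurchases()
        .build()

    fun offerRewardedVideo() {
        billingClient.startConnection(object : BillingClientStateListener {
            override fun onBillingSetupFinished(result: BillingResult) {
                if (result.responseCode == BillingClient.BillingResponseCode.OK) loadAndShow()
            }
            override fun onBillingServiceDisconnected() { /* retry with backoff */ }
        })
    }

    private fun loadAndShow() {
        val query = SkuDetailsParams.newBuilder()
            .setSkusList(listOf("rewarded_video_sku"))      // hypothetical SKU id
            .setType(BillingClient.SkuType.INAPP)
            .build()
        billingClient.querySkuDetailsAsync(query) { _, skuDetailsList ->
            val sku = skuDetailsList?.firstOrNull() ?: return@querySkuDetailsAsync
            // Pre-load the rewarded video, then present it through the billing flow.
            val load = RewardLoadParams.newBuilder().setSkuDetails(sku).build()
            billingClient.loadRewardedSku(load) { loadResult ->
                if (loadResult.responseCode == BillingClient.BillingResponseCode.OK) {
                    val flow = BillingFlowParams.newBuilder().setSkuDetails(sku).build()
                    billingClient.launchBillingFlow(activity, flow)
                }
            }
        }
    }

    // Fires after the user finishes the video; grant the in-app reward here.
    override fun onPurchasesUpdated(result: BillingResult, purchases: MutableList<Purchase>?) {
        if (result.responseCode == BillingClient.BillingResponseCode.OK) {
            purchases?.forEach { /* credit coins / unlock the benefit for it.sku */ }
        }
    }
}
```

Because rewarded products ride on the billing library rather than an ad SDK, the reward is delivered through the same purchase callback a developer would already handle for paid in-app products.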

The launch comes at a time when Apple has been seeing success with subscriptions, which it has fully embraced, pushed and sometimes even let run amok. Subscriptions are now one of the biggest factors, outside of games, in app store revenue growth.

But Android users, historically, have been more averse to paying for apps than those on iOS. Apple’s store generates nearly double the revenue of Google Play — despite having far fewer downloads. That means Android developers will not be able to tap into the subscription craze at the same scale as their iOS counterparts, and it means cross-platform developers may further prioritize building for iOS as a result.

Rewarded products offer those developers an alternative path to monetization on a platform where that’s often been more difficult, outside of running ads.

Google says the rewarded video product is launching into open beta, and is available in the Play Console for developers.


AT&T may keep majority ownership of DirecTV as it closes in on final deal

A DirecTV satellite dish seen outside a bar in Portland, Oregon, in October 2019.

AT&T is reportedly closing in on a deal to sell a stake in DirecTV to TPG, a private-equity firm.

Unfortunately for customers hoping that AT&T will relinquish control of DirecTV, a Reuters report on Friday said the pending deal would give TPG a “minority stake” in AT&T’s satellite-TV subsidiary. On the other hand, a private-equity firm looking to wring value out of a declining business wouldn’t necessarily be better for DirecTV customers than AT&T is.

It’s also possible that AT&T could cede operational control of DirecTV even if it remains the majority owner. CNBC in November reported on one proposed deal in which “AT&T would retain majority economic ownership of the [DirecTV and U-verse TV] businesses, and would maintain ownership of U-verse infrastructure, including plants and fiber,” while the buyer of a DirecTV stake “would control the pay-TV distribution operations and consolidate the business on its books.”

The latest talks between AT&T and TPG are “exclusive,” with other bidders out of the running for now, Reuters wrote last week, citing anonymous sources. “The advanced talks with TPG are the culmination of an auction that AT&T ran for DirecTV for several months,” the report said.

DirecTV lost most of its value under AT&T ownership

AT&T bought DirecTV in 2015 for $49 billion and has reportedly been unable to get bids valuing the TV provider at even half that amount. “The exact price TPG is willing to pay could not be learned, but sources said the deal could value DirecTV at more than $15 billion,” Reuters wrote, suggesting that the months-long auction didn’t raise the price much, if at all.

Bloomberg also reported that AT&T and TPG are in exclusive talks over DirecTV. “A potential deal is weeks away, and the talks could still fall apart… The agreement being discussed is highly structured and would include preferred stock,” Bloomberg wrote, citing an anonymous source.

TPG says it manages $85 billion in assets including investments in dozens of technology companies.

AT&T lost 8 million customers

AT&T has lost nearly 8 million customers since early 2017 from its Premium TV services, which includes DirecTV satellite, U-verse wireline video, and the newer AT&T TV online service. Total customers in that category decreased from over 25 million in early 2017 to 17.1 million at the end of September 2020.

While the industrywide shift from cable and satellite TV to online streaming has hurt the business, AT&T itself accelerated DirecTV’s customer losses by repeatedly raising prices and removing promotional offers. AT&T just raised TV prices again last week. AT&T is scheduled to report earnings—including the latest TV-customer figures—on Wednesday.


The history of the connected battlespace, part one: command, control, and conquer

Believe it or not, this fictional version of NORAD shows off the idea of the “connected battlespace” even better than the real thing. (MGM/UA)

Since the earliest days of warfare, commanders of forces in the field have sought greater awareness and control of what is now commonly referred to as the “battlespace”—a fancy word for all of the elements and conditions that shape and contribute to a conflict with an adversary, and all of the types of military power that can be brought to bear to achieve their objectives.

The clearer a picture military decision-makers have of the entire battlespace, the better informed their tactical and strategic decisions should be. Bringing computers into the mix in the 20th century meant a whole new set of challenges and opportunities. The ability of computers to sort through enormous piles of data to identify trends that aren’t obvious to people (something often referred to as “big data”) didn’t just open up new ways for commanders to get a view of the “big picture”—it let commanders see that picture closer and closer to real time, too.

And time, as it turns out, is key. The problem that digital battlespace integration is intended to solve is reducing the time it takes commanders to close the “OODA loop,” a concept developed by US Air Force strategist Colonel John Boyd. OODA stands for “observe, orient, decide, act”—the decision loop made repeatedly in responding to unfolding events in a tactical environment (or just about anywhere else). OODA is largely an Air Force thing, but all the different branches of the military have similar concepts; the Army, for instance, has long referred to the Lawson Command and Control Loop in its own literature.

The OODA loop, with unfortunately grainy captioning. (See the linked PDF to view the diagram in context.)

By being able to maintain awareness of the unfolding situation, and to respond to changes and challenges more quickly than an adversary can—by “getting inside” the opponent’s decision cycle—military commanders can in theory gain an advantage over that adversary and shape events in their favor.

Whether it’s in the cockpit or at the command level, speeding up the sensing of a threat and the response to it (did Han really shoot first, or did he just close the OODA loop faster?) is seen by military strategists as the key to dominance of every domain of warfare. However, closing that loop above the tactical level has historically been a challenge, because the communications between the front lines and top-level commanders have rarely been effective at giving everyone a true picture of what’s going on. And for much of the past century, the US military’s “battlespace management” was designed for dealing with a particular type of Cold War adversary—and not the kind they ended up fighting for much of the last 30 years, either.

Now that the long tail of the Global War on Terror is tapering down to a thin tip, the Department of Defense faces the need to re-examine the lessons learned over the past three decades (and especially the last two). The risks of learning the wrong things are huge. Trillions of dollars have been spent for not much effect over the last few decades. The Army’s enormous (and largely failed) Future Combat Systems program and certain other big-ticket technology plays that tried to bake a digitized battlefield into a bigger package have, if anything, demonstrated why pulling off big visions of a totally digitally integrated battlefield carries major risks.

At the same time, other elements of the command, control, communication, computing, intelligence, surveillance and reconnaissance (or just “C4ISR” if you’re into the whole brevity thing) toolkit have been able to build on basic building blocks and be (relatively) successful. The difference has often been in the doctrine that guides how technology is applied, and in how grounded the vision behind that doctrine is in reality.

Artist’s impression of a military command and control console. (Milan_Jovic / Getty Images)

Linking up

In the beginning, there was tactical command and control. The basic technical components of the early “integrated battlespace”—the automation of situational awareness through technologies such as radar with integrated “Identification, Friend or Foe” (IFF)—emerged during World War II. But the modern concept of the integrated battlespace has its most obvious roots in the command and control (C2) systems of the early Cold War.

More specifically, they can be traced to one man: Ralph Benjamin, an electronics engineer with the Royal Naval Scientific Service. Benjamin, a Jewish refugee, went to work there in 1944, in what was called the Admiralty Signals Establishment.

“They were going to call it the Admiralty Radar & Signals Establishment,” Benjamin recounted in an oral history for the IEEE, “and got as far as printing the first letterheads with ARSE, before deciding it might be more tactful to make it the Admiralty Signals & Radar Establishment (ASRE).” During the war, he worked on a team developing radar for submarines, and also on the Mark V IFF system.

As the war came to an end, he had begun working on how to improve the flow of C2 information across naval battle groups. It was in that endeavor that Benjamin developed and later patented the display cursor and trackball, the forerunner of the computer mouse, as part of his work on the first electronic C2 system, called the Comprehensive Display System (CDS). CDS allowed data shared from all of a battle group’s sensors to be overlaid on a single display.

A SAGE weapons director console.

The basic design and architecture of Benjamin’s CDS was the foundation for nearly all US and NATO digital C2 systems developed over the next 30 years. It led to the US Air Force’s Semi-Automatic Ground Environment (SAGE)—the system used to direct and control North American Air Defense (NORAD)—as well as the Navy Tactical Data System (NTDS), which reached the US fleet in the early 1960s. The same technology would be applied to handling antisubmarine warfare (much to the dismay of some Russian submarine commanders) with the ASWC&CS, deployed to Navy ships in the late 1960s and 1970s.

The core of Benjamin’s C2 system was a digital data link protocol today known as Link-11 (or MIL-STD-6011). Link-11 is a radio network protocol based on high frequency (HF) or ultra-high frequency (UHF) radio that can transfer data at rates of 1,364 or 2,250 bits per second. Link-11 remains a standard across NATO today, because of its ability to network units not in line of sight, and is used in some form across all the branches of the US military—along with a point-to-point version (Link-11B) and a handful of other tactical digital information link (TADIL) protocols. But all the way up through the 1990s, various attempts to create better, faster, and more applicable versions of Link-11 failed.

Alphabet soup: from C2 to C3I to C4ISR

Beyond air and naval operations control, C2 was mostly about human-to-human communications. The first efforts to computerize C2 on a broader level came from the top down, following the Cuban Missile Crisis.

In an effort to speed National Command Authority communications to units in the field in time of crisis, the Defense Department commissioned the Worldwide Military Command and Control System (WWMCCS, or “wimeks”). WWMCCS was intended to give the President, the Secretary of Defense, and Joint Chiefs of Staff a way to rapidly receive threat warnings and intelligence information, and to then quickly assign and direct actions through the operational command structure.

Initially, WWMCCS was assembled from a collection of federated systems built at different command levels—nearly 160 different computer systems, based on 30 different software systems, spread across 81 sites. And that loose assemblage of systems resulted in early failures. During the Six-Day War between Egypt and Israel in 1967, orders were sent by the Joint Chiefs of Staff to move the USS Liberty away from the Israeli coastline, and despite five high-priority messages to the ship sent through WWMCCS, none were received for over 13 hours. By then, the ship had already been attacked by the Israelis.

There would be other failures that would demonstrate the problems with the disjointed structure of C2 systems, even as improvements were made to WWMCCS and other similar tools throughout the 1970s. The evacuation of Saigon at the end of the Vietnam War, the Mayaguez Incident, and the debacle at Desert One during the attempted hostage rescue in Iran were the most visceral of these, as commanders failed to grasp conditions on the ground while disaster unfolded.

These cases, in addition to the failed readiness exercises Nifty Nugget and Proud Spirit in 1978 and 1979, were cited by John Boyd in a 1987 presentation entitled “Organic Design for Command and Control,” as was the DOD’s response to them:

…[M]ore and better sensors, more communications, more and better computers, more and better display devices, more satellites, more and better fusion centers, etc—all tied to one giant fully informed, fully capable C&C system. This way of thinking emphasizes hardware as the solution.

Boyd’s view was that this centralized, top-down approach would never be effective, because it failed to create the conditions key to success—conditions he saw as arising from things purely human, based on true understanding, collaboration, and leadership. “[C2] represents a top-down mentality applied in a rigid or mechanical (or electrical) way that ignores as well as stifles the implicit nature of human beings to deal with uncertainty, change, and stress,” Boyd noted.

Those were the elements missing from late Cold War efforts, and what had been called “C2” gained some more Cs and evolved into “C4I”—command, control, communications, computers, and intelligence—systems. Eventually, surveillance and reconnaissance would be tagged onto the initialism, turning it into “C4ISR.”

While there were notable improvements in some areas, such as sensors—as demonstrated by the Navy’s Aegis system and the Patriot missile system—there was still an unevenness of information sharing. And the Army’s C4I lacked any real digital command, control, and communications systems well into the 1990s. Most of the tasks involved were manual and required voice communications, or even couriers, to verify orders.

The Gulf War may not have been a true test of battlefield command and control, but it did hint at some of the elements that would both enhance and complicate the battlefield picture of the future. For example, it featured the first use of drones to perform battlefield targeting and intelligence collection—as well as the first surrender of enemy troops to a drone, when Iraqi troops on Faylaka Island signaled their surrender to the USS Wisconsin’s Pioneer RPV. The idea of having remotely controlled platforms that could provide actionable information networked into the battlefield information space was something I had seen the early hints of in the late 1980s.


DDoSers are abusing Microsoft RDP to make attacks more powerful


DDoS-for-hire services are abusing the Microsoft Remote Desktop Protocol to increase the firepower of distributed denial-of-service attacks that paralyze websites and other online services, a security firm said this week.

Typically abbreviated as RDP, Remote Desktop Protocol is the underpinning for a Microsoft Windows feature that allows one device to log into another device over the Internet. RDP is mostly used by businesses to save employees the cost or hassle of having to be physically present when accessing a computer.

As is typical with many authenticated systems, RDP responds to login requests with a much longer sequence of bits that establishes a connection between the two parties. So-called booter/stresser services, which for a fee will bombard Internet addresses with enough data to take them offline, have recently embraced RDP as a means to amplify their attacks, security firm Netscout said.

The amplification allows attackers with only modest resources to increase the size of the data streams they direct at targets. The technique works by bouncing a relatively small amount of data off the amplifying service, which in turn reflects a much larger amount of data at the final target. With an amplification factor of 85.9 to 1, 10Gbps of requests directed at an RDP server will deliver roughly 860Gbps to the target.
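As a quick check on that arithmetic, here is a trivial Kotlin sketch; the 85.9:1 factor is the one Netscout reports, while the helper function and sample inputs are purely illustrative.

```kotlin
// Back-of-the-envelope reflection/amplification math for RDP-over-UDP floods.
// The 85.9:1 factor comes from Netscout; the numbers below are illustrative.
fun reflectedGbps(attackerGbps: Double, amplificationFactor: Double = 85.9): Double =
    attackerGbps * amplificationFactor

fun main() {
    println(reflectedGbps(10.0)) // ~859 Gbps aimed at the victim, as in the example above
    println(reflectedGbps(1.0))  // even 1Gbps of spoofed requests reflects as ~86Gbps
}
```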

“Observed attack sizes range from ~20 Gbps – ~750 Gbps,” Netscout researchers wrote. “As is routinely the case with newer DDoS attack vectors, it appears that after an initial period of employment by advanced attackers with access to bespoke DDoS attack infrastructure, RDP reflection/amplification has been weaponized and added to the arsenals of so-called booter/stresser DDoS-for-hire services, placing it within the reach of the general attacker population.”

DDoS amplification attacks date back decades. As legitimate Internet users collectively block one vector, attackers find new ones to take their place. DDoS amplifiers have included open DNS resolvers, the WS-Discovery protocol used by IoT devices, and the Internet’s Network Time Protocol. One of the most powerful amplification vectors in recent memory is the memcached protocol, which has a factor of 51,000 to 1.

DDoS amplification attacks work by using UDP network packets, which are easily spoofable on many networks. An attacker sends the vector a request and spoofs the headers to give the appearance the request came from the target. The amplification vector then sends the response to the target whose address appears in the spoofed packets.

There are about 33,000 RDP servers on the Internet that can be abused in amplification attacks, Netscout said. RDP can run over TCP as well as UDP, but it is the servers responding over UDP that can be abused as reflectors.

Netscout recommended that RDP servers be accessible only over virtual private network services. In the event RDP servers offering remote access over UDP can’t be immediately moved behind VPN concentrators, administrators should disable RDP over UDP as an interim measure.

Besides harming the Internet as a whole, unsecured RDP servers can be a hazard to the organizations that expose them to the Internet.

“The collateral impact of RDP reflection/amplification attacks is potentially quite high for organizations whose Windows RDP servers are abused as reflectors/amplifiers,” Netscout explained. “This may include partial or full interruption of mission-critical remote-access services, as well as additional service disruption due to transit capacity consumption, state-table exhaustion of stateful firewalls, load balancers, etc.”
