Social

Beluga Whale Retrieves Drowning Phone, Wins Internet With Good Samaritan Deed Caught on Video


Last week, the Internet exploded with stories of a beluga whale spotted in Norway wearing a harness with mounts for an action camera. Many believed it to be a ‘specially trained’ mammal conditioned for spying by Russia. While that story made some waves, another, more feel-good tale of a beluga whale is making the rounds on social media. The whale in question recently retrieved a lucky individual’s phone after it fell into deep water. But that’s not all: the helpful beluga looked elated after receiving friendly pats for its good deed.

Stories of a human-friendly beluga whale having a ‘great time’ in Norwegian waters have been making headlines, with some calling the mammal a pawn in a sinister espionage plot. But the jovial beluga whale’s recent deed is putting a question mark over all such assumptions. A video posted on Instagram by Isa Opdahl shows the beluga rising to the surface with an accidentally dropped phone in its mouth.

While we don’t know the phone’s status after being submerged in water, the beluga whale’s good Samaritan act is certainly dropping jaws and leaving netizens in awe. In the video, the mammal can be seen holding the phone steadily in its jaws, then returning to the depths after receiving some gentle taps from people on a boat, like a ‘good boi’. The video’s caption, ‘when animals are kinder than humans’, leaves a strong impression on anyone doubting the beluga whale’s intentions.

We are not sure whether this is the same beluga whale that has been accused of being a Russian spy or a different one, but social media users believe it is the former. The whale has been named ‘Whaledimir’ after Russian president Vladimir Putin, the name being finalised after around 25,000 people voted on the beluga whale’s baptism.





Social

New privacy bill would put major limits on targeted advertising – TechCrunch


A new bill seeks to dramatically reshape the online advertising landscape to the detriment of companies like Facebook, Google and data brokers that leverage deep stores of personal information to make money from targeted ads.

The bill, the Banning Surveillance Advertising Act, introduced by Reps. Anna Eshoo (D-CA) and Jan Schakowsky (D-IL) in the House and Cory Booker (D-NJ) in the Senate, would dramatically limit the ways that tech companies serve ads to their users, banning the use of personal data altogether.

Any targeting based on “protected class information, such as race, gender, and religion, and personal data purchased from data brokers” would be off-limits were the bill to pass. Platforms could still target ads based on general location data at the city or state level, and “contextual advertising” based on the content a user is interacting with would still be allowed.

The bill would empower the FTC and state attorneys general to enforce violations, with fines of up to $5,000 per incident for knowing violations.

“The ‘surveillance advertising’ business model is premised on the unseemly collection and hoarding of personal data to enable ad targeting,” Rep. Eshoo said. “This pernicious practice allows online platforms to chase user engagement at great cost to our society, and it fuels disinformation, discrimination, voter suppression, privacy abuses, and so many other harms.”

Sen. Booker called the targeted advertising model “predatory and invasive,” stressing how the practice exacerbates misinformation and extremism on social media platforms.

Privacy-minded companies including search engine maker DuckDuckGo and Proton, creator of ProtonMail, backed the legislation along with organizations including the Electronic Privacy Information Center (EPIC), the Anti-Defamation League, Accountable Tech and Common Sense Media.


Social

Snapchat says it’s getting better at finding illicit drug dealers before users do – TechCrunch


Snapchat has faced increasing criticism in recent years as the opioid crisis plays out on social media, often with tragic results.

In October, an NBC investigation reported the stories of a number of young people aged 13 to 23 who died after purchasing fentanyl-laced pills on Snapchat. Snapchat parent company Snap responded by committing to improve its ability to detect and remove this kind of content and ushering users who search for drug-related content to an educational harm reduction portal.

Snapchat provided a glimpse of its progress against illicit drug sales on the platform, noting that 88 percent of the drug-related content it finds is now identified proactively by automated systems, with community reporting accounting for the other 12 percent. Snap says this number is up by a third since its October update, indicating that more of this content is being detected up front, before users encounter and report it.

“Since this fall, we have also seen another important indicator of progress: a decline in community-reported content related to drug sales,” Snap wrote in a blog post. “In September, over 23% of drug-related reports from Snapchatters contained content specifically related to sales, and as a result of proactive detection work, we have driven that down to 16% as of this month. This marks a decline of 31% in drug-related reports. We will keep working to get this number as low as possible.”

The company says that it also recently introduced a new safeguard that prevents 13 to 17 year-old users from showing up in its Quick Add user search results unless they have friends in common with the person searching. That precaution is meant to discourage minors from connecting with users they don’t know, in this case to deter online drug transactions.

Snapchat is also adding information from the CDC on the dangers of fentanyl into its “Heads Up” harm reduction portal and partnering with the Community Anti-Drug Coalitions of America (CADCA), a global nonprofit working to “prevent substance misuse through collaborative community efforts.”

The company works with experts to identify new search terms that sellers use to get around its rules against selling illicit substances. Snapchat calls the work to keep its lexicon of drug sales jargon up to date “a constant, ongoing effort.”

The U.S. Drug Enforcement Administration published a warning last month about the dangers of pills purchased online that contain fentanyl, a synthetic opioid that is deadlier in much smaller doses than heroin. Because fentanyl increasingly shows up in illicitly purchased drugs, including those purchased online, it can prove fatal to users who believed they were ingesting other substances.

In December, DEA Administrator Anne Milgram called Snapchat and other social media apps “haven[s] for drug traffickers” in an interview with CBS. “Because drug traffickers are harnessing social media because it is accessible, they’re able to access millions of Americans and it is anonymous and they’re able to sell these fake pills that are not what they say they are,” Milgram said.

While social media platforms dragged their feet about investing in proactive, aggressive content moderation, online drug sales took root. Companies have sealed up some of the more obvious ways to find illicit drugs online (a few years ago it was as simple as searching #painpills on Instagram, for instance) but savvy sellers adapt their practices to get around new rules as they’re made.

The rise of fentanyl is a significant factor exacerbating the American opioid epidemic and the substance’s prevalence in online sales presents unique challenges. In an October hearing on children’s online safety, Snap called the issue the company’s “top priority,” but many lawmakers and families affected by online drug sales remain skeptical that social media companies are taking their role in the opioid crisis seriously.


Social

Twitter expands misinformation reporting feature to more international markets – TechCrunch


Last August, Twitter introduced a new feature in select markets, including the U.S., that invited users to report misinformation they encountered on its platform — including things like election-related or Covid-19 misinformation, for example. Now the company is rolling out the feature to more markets as its test expands. In addition to the U.S., Australia, and South Korea, where the feature had already gone live, Twitter is rolling out the reporting option to users in Brazil, Spain, and the Philippines.

The company also offered an update on the feature’s traction, noting that the company has received more than 3.7 million user-submitted reports since its debut. For context, Twitter has around 211 million monetizable active daily users, as of its most recent earnings, 37 million of which are U.S.-based and 174 million based in international markets.

According to Yoel Roth, Twitter’s head of site integrity, the “vast majority” of content the company takes action on for misinformation is identified proactively, either through automation (which accounts for more than 50% of enforcements) or proactive monitoring. User-submitted reports via the new feature, however, have helped Twitter identify patterns of misinformation, which is where the feature has seen the most success so far, Roth says. This is particularly true for non-text-based misinformation, such as media and URLs that link to content hosted off Twitter’s platform.

But he also noted that when Twitter reviewed a subset of individual reported tweets, only around 10% were considered “actionable” compared with 20-30% in other policy areas, as many tweets analyzed didn’t contain misinformation at all.

In markets where the feature is available, users can report misinformation by clicking the three-dot menu in the upper-right of a tweet, then choosing the “report tweet” option. From there, they’ll be able to click the option “it’s misleading.”

While Twitter already offered a way to report violating content on its platform before the addition of the new flagging option, its existing reporting flow didn’t offer a clear way to report tweets containing misinformation. Instead, users would have to pick from options like “it’s suspicious or spam” or “it’s abusive or harmful,” among others, before further narrowing down how the specific tweet was in violation of Twitter’s rules.

The ability to flag tweets as misinformation also allows users to more quickly and directly flag content that may not fit into existing rules. The reports themselves are tied into Twitter’s existing enforcement flow, where a combination of human review and moderation is used to determine whether a punitive action should take place. Twitter had also said the reported tweets would be sorted for review based on priority, meaning tweets from accounts with a large following or those showing higher levels of engagement would be reviewed first.

The feature is rolling out at a time when social networks are being pressured to clean up the misinformation they’ve allowed to spread across their platforms, or risk regulation that will enforce such cleanups and perhaps even enact penalties for not doing so.

The flagging option is not the only way Twitter is working to fight misinformation. The company also runs an experiment called Birdwatch, which aims to crowdsource fact-checking by allowing Twitter users to annotate misleading tweets with factual information. This service is still in pilot testing and being updated based on user feedback.

