Biz & IT

We finally started taking screen time seriously in 2018

At the beginning of this year, I was using my iPhone to browse new titles on Amazon when I saw the cover of “How to Break Up With Your Phone” by Catherine Price. I downloaded it on Kindle because I genuinely wanted to reduce my smartphone use, but also because I thought it would be hilarious to read a book about breaking up with your smartphone on my smartphone (stupid, I know). Within a couple of chapters, however, I was motivated enough to download Moment, a screen time tracking app recommended by Price, and re-purchase the book in print.

Early in “How to Break Up With Your Phone,” Price invites her readers to take the Smartphone Compulsion Test, developed by David Greenfield, a psychiatry professor at the University of Connecticut who also founded the Center for Internet and Technology Addiction. The test has 15 questions, but I knew I was in trouble after answering the first five. Humbled by my very high score, which I am too embarrassed to disclose, I decided it was time to get serious about curtailing my smartphone usage.

Of the chapters in Price’s book, the one called “Putting the Dope in Dopamine” resonated with me the most. She writes that “phones and most apps are deliberately designed without ‘stopping cues’ to alert us when we’ve had enough—which is why it’s so easy to accidentally binge. On a certain level, we know that what we’re doing is making us feel gross. But instead of stopping, our brains decide the solution is to seek out more dopamine. We check our phones again. And again. And again.”

Gross was exactly how I felt. I bought my first iPhone in 2011 (and owned an iPod Touch before that). It was the first thing I looked at in the morning and the last thing I saw at night. I would claim it was because I wanted to check work stuff, but really I was on autopilot. Thinking about what I could have accomplished over the past eight years if I hadn’t been constantly attached to my smartphone made me feel queasy. I also wondered what it had done to my brain’s feedback loop. Just as sugar changes your palate, making you crave more and more sweets to feel sated, I was worried that the incremental doses of immediate gratification my phone doled out would diminish my ability to feel genuine joy and pleasure.

Price’s book was published in February, at the beginning of a year when it feels like tech companies finally started to treat excessive screen time as a liability (or at least do more than pay lip service to it). In addition to the introduction of Screen Time in iOS 12 and Android’s digital wellbeing tools, Facebook, Instagram and YouTube all launched new features that allow users to track time spent on their sites and apps.

Early this year, influential activist investors who hold Apple shares also called for the company to focus on how its devices affect kids. In a letter to Apple, hedge fund Jana Partners and the California State Teachers’ Retirement System (CalSTRS) wrote “social media sites and applications for which the iPhone and iPad are a primary gateway are usually designed to be as addictive and time-consuming as possible, as many of their original creators have publicly acknowledged,” adding that “it is both unrealistic and a poor long-term business strategy to ask parents to fight this battle alone.”

The growing mound of research

Then in November, researchers at Penn State released an important new study that linked social media usage by adolescents to depression. Led by psychologist Melissa Hunt, the experimental study monitored 143 students with iPhones from the university for three weeks. The undergraduates were divided into two groups: one was instructed to limit their time on social media, including Facebook, Snapchat and Instagram, to just 10 minutes per app per day (their usage was confirmed by checking their phones’ iOS battery use screens). The other group continued using social media apps as they usually did. At the beginning of the study, a baseline was established with standard tests for depression, anxiety, social support and other issues, and each group continued to be assessed throughout the experiment.

The findings, published in the Journal of Social and Clinical Psychology, were striking. The researchers wrote that “the limited use group showed significant reductions in loneliness and depression over three weeks compared to the control group.”

Even the control group benefitted, despite not being given limits on their social media use. “Both groups showed significant decreases in anxiety and fear of missing out over baselines, suggesting a benefit of increased self-monitoring,” the study said. “Our findings strongly suggest that limiting social media use to approximately 30 minutes a day may lead to significant improvement in well-being.”

Other academic studies published this year added to the growing roster of evidence that smartphones and mobile apps can significantly harm your mental and physical wellbeing.

A group of researchers from Princeton, Dartmouth, the University of Texas at Austin, and Stanford published a study in the Journal of Experimental Social Psychology that found using smartphones to take photos and videos of an experience actually reduces the ability to form memories of it. Others warned against keeping smartphones in your bedroom or even on your desk while you work. Optical chemistry researchers at the University of Toledo found that blue light from digital devices can cause molecular changes in your retina, potentially speeding macular degeneration.

So over the past 12 months, I’ve certainly had plenty of motivation to reduce my screen time. In fact, every time I checked the news on my phone, there seemed to be yet another headline about the perils of smartphone use. I began using Moment to track my total screen time and how it was divided between apps. I took two of Moment’s in-app courses, “Phone Bootcamp” and “Bored and Brilliant.” I also used the app to set a daily time limit, turned on “tiny reminders,” or push notifications that tell you how much time you’ve spent on your phone so far throughout the day, and enabled the “Force Me Off When I’m Over” feature, which basically annoys you off your phone when you go over your daily allotment.

At first I managed to cut my screen time in half. I had thought some of the benefits, like the better attention span mentioned in Price’s book, were too good to be true. But I found my concentration really did improve significantly after just a week of limiting my smartphone use. I read more long-form articles, caught up on some TV shows, and finished knitting a sweater for my toddler. Most importantly, the nagging feeling I had at the end of each day about frittering all my time away diminished, and so I lived happily ever after, snug in the knowledge that I’m not squandering my life on memes, clickbait and makeup tutorials.

Just kidding.

Holding my iPod Touch in 2010, a year before I bought my first smartphone and back when I still had an attention span.

After a few weeks, my screen time started creeping up again. First I turned off Moment’s “Force Me Off” feature, because my apartment doesn’t have a landline and I needed to be able to check texts from my husband. I kept the tiny reminders, but those became easier and easier to ignore. But even as I mindlessly scrolled through Instagram or Reddit, I felt the existential dread of knowing that I was misusing the best years of my life. With all that at stake, why is limiting screen time so hard?

I wish I knew how to quit you, small device

I decided to talk to the CEO of Moment, Tim Kendall, for some insight. Founded in 2014 by UI designer and iOS developer Kevin Holesh, Moment recently launched an Android version, too. It’s one of the best known of a genre that includes Forest, Freedom, Space, Off the Grid, AntiSocial and App Detox, all dedicated to reducing screen time (or at least encouraging more mindful smartphone use).

Kendall told me that I’m not alone. Moment has 7 million users and “over the last four years, you can see that average usage goes up every year,” he says. By looking at overall data, Moment’s team can tell that its tools and courses do help people reduce their screen time, but that often it starts creeping up again. Combating that with new features is one of the company’s main goals for next year.

“We’re spending a lot of time investing in R&D to figure out how to help people who fall into that category. They did Phone Bootcamp, saw nice results, saw benefits, but they just weren’t able to figure out how to do it sustainably,” says Kendall. Moment already releases new courses regularly (recent topics have included sleep, attention span, and family time) and recently began offering them on a subscription basis.

“It’s habit formation and sustained behavior change that is really hard,” says Kendall, who previously held positions as president at Pinterest and Facebook’s director of monetization. But he’s optimistic. “It’s tractable. People can do it. I think the rewards are really significant. We aren’t stopping with the courses. We are exploring a lot of different ways to help people.”

As Jana Partners and CalSTRS noted in their letter, a particularly important issue is the impact of excessive smartphone use on the first generation of teenagers and young adults to have constant access to the devices. Kendall notes that suicide rates among teenagers have increased dramatically over the past two decades. Though research hasn’t explicitly linked time spent online to suicide, the link between screen time and depression has been noted many times already, as in the Penn State study.

But there is hope. Kendall says that the Moment Coach feature, which delivers short, daily exercises to reduce smartphone use, seems to be particularly effective among millennials, the generation most stereotypically associated with being pathologically attached to their phones. “It seems that 20- and 30-somethings have an easier time internalizing the coach and therefore reducing their usage than 40- and 50-somethings,” he says.

Kendall stresses that Moment does not see smartphone use as an all-or-nothing proposition. Instead, he believes that people should replace brain junk food, like social media apps, with things like online language courses or meditation apps. “I really do think the phone used deliberately is one of the most wonderful things you have,” he says.

Researchers have found that taking smartphone photos and videos during an experience may decrease your ability to form memories of it. (Steved_np3/Getty Images)

I’ve tried to limit most of my smartphone usage to apps like Kindle, but the best solution has been to find offline alternatives to keep myself distracted. For example, I’ve been teaching myself new knitting and crochet techniques, because I can’t do either while holding my phone (though I do listen to podcasts and audiobooks). It also gives me a tactile way to measure the time I spend off my phone because the hours I cut off my screen time correlate to the number of rows I complete on a project. To limit my usage to specific apps, I rely on iOS Screen Time. It’s really easy to just tap “Ignore Limit,” however, so I also continue to depend on several of Moment’s features.

While several third-party screen time tracking app developers have recently found themselves under more scrutiny by Apple, Kendall says the launch of Screen Time hasn’t significantly impacted Moment’s business or sign ups. The launch of their Android version also opens up a significant new market (Android also enables Moment to add new features that aren’t possible on iOS, including only allowing access to certain apps during set times).

The short-term impact of iOS Screen Time has “been neutral, but I think in the long-term it’s really going to help,” Kendall says. “I think in the long-term it’s going to help with awareness. If I were to use a diet metaphor, I think Apple has built a terrific calorie counter and scale, but unfortunately they have not given people nutritional guidelines or a regimen. If you talk to any behavioral economist, notwithstanding all that’s been said about the quantified self, numbers don’t really motivate people.”

Guilting also doesn’t work, at least not for the long-term, so Moment tries to take “a compassionate voice,” he adds. “That’s part of our brand and company and ethos. We don’t think we’ll be very helpful if people feel judged when we use our product. They need to feel cared for and supported, and know that the goal is not perfection, it’s gradual change.”

Many smartphone users are probably in my situation: alarmed by their screen time stats, unhappy about the time they waste, but also finding it hard to quit their devices. We don’t just use our smartphones to distract ourselves or get a quick dopamine rush from social media likes. We use them to manage our workload, keep in touch with friends, plan our days, read books, look up recipes, and find fun places to go. I’ve often thought about buying a Yondr bag or asking my husband to hide my phone from me, but I know that ultimately won’t help.

As cheesy as it sounds, the impetus for change must come from within. No amount of academic research, screen time apps, or analytics can make up for that.

One thing I tell myself is that unless developers find more ways to force us to change our behavior or another major paradigm shift occurs in mobile communications, my relationship with my smartphone will move in cycles. Sometimes I’ll be happy with my usage, then I’ll lapse, then I’ll take another Moment course or try another screen time app, and hopefully get back on track. In 2018, however, the conversation around screen time finally gained some desperately needed urgency (and in the meantime, I’ve actually completed some knitting projects instead of just thumbing my way through #knittersofinstagram).


Amazon “seized and destroyed” 2 million counterfeit products in 2020

Amazon trailers backed into bays at a distribution center in Miami, Florida, in August 2019.

Amazon “seized and destroyed” over 2 million counterfeit products that sellers sent to Amazon warehouses in 2020 and “blocked more than 10 billion suspected bad listings before they were published in our store,” the company said in its first “Brand Protection Report.”

In 2020, “we seized and destroyed more than 2 million products sent to our fulfillment centers and that we detected as counterfeit before being sent to a customer,” Amazon’s report said. “In cases where counterfeit products are in our fulfillment centers, we separate the inventory and destroy those products so they are not resold elsewhere in the supply chain,” the report also said.

Third-party sellers can also ship products directly to consumers instead of using Amazon’s shipping system. The 2 million fakes found in Amazon fulfillment centers would only account for counterfeit products from sellers using the “Fulfilled by Amazon” service.

The counterfeit problem got worse over the past year. “Throughout the pandemic, we’ve seen increased attempts by bad actors to commit fraud and offer counterfeit products,” Amazon VP Dharmesh Mehta wrote in a blog post yesterday.

Counterfeiting is a longstanding problem on Amazon. Other problems on Amazon that harm consumers include the sale of dangerous products, fake reviews, defective third-party goods, and the passing of bribes from unscrupulous sellers to unscrupulous Amazon employees and contractors. One US appeals court ruled in 2019 that Amazon can be held responsible for defective third-party goods, but Amazon has won other similar cases. Amazon is again arguing that it should not be held liable for a defective third-party product in a case before the Texas Supreme Court that involves a severely injured toddler.

Amazon tries to reassure legit sellers

Amazon’s new report was meant to reassure legitimate sellers that their products won’t be counterfeited. While counterfeits remain a problem for unsuspecting Amazon customers, the e-commerce giant said that “fewer than 0.01 percent of all products sold on Amazon received a counterfeit complaint from customers” in 2020. Of course, people may buy and use counterfeit products without ever realizing they are fake or without reporting it to Amazon, so that percentage may not capture the extent of the problem.

Amazon’s report on counterfeits describes extensive systems and processes to determine which sellers can do business on Amazon. While Amazon has argued in court that it is not liable for what third parties sell on its platform, the company is monitoring sellers in an effort to maintain credibility with buyers and legitimate sellers.

Amazon said it “invested over $700 million and employed more than 10,000 people to protect our store from fraud and abuse” in 2020, adding:

We leverage a combination of advanced machine learning capabilities and expert human investigators to protect our store proactively from bad actors and bad products. We are constantly innovating to stay ahead of bad actors and their attempts to circumvent our controls. In 2020, we prevented over 6 million attempts to create new selling accounts, stopping bad actors before they published a single product for sale, and blocked more than 10 billion suspected bad listings before they were published in our store.

“This is an escalating battle with criminals that attempt to sell counterfeits, and the only way to permanently stop counterfeiters is to hold them accountable through litigation in the court system and through criminal prosecution,” Amazon also said. “In 2020, we established a new Counterfeit Crimes Unit to build and refer cases to law enforcement, undertake independent investigations or joint investigations with brands, and pursue civil litigation against counterfeiters.”

Amazon said it now “report[s] all confirmed counterfeiters to law enforcement agencies in Canada, China, the European Union, UK, and US.” Amazon also urged governments to “increase prosecution of counterfeiters, increase resources for law enforcement fighting counterfeiters, and incarcerate these criminals globally.”

Stricter seller-verification system

Amazon said it had a “new live video and physical address verification” system in place in 2020 in which “Amazon connects one-on-one with prospective sellers through a video chat or in person at an Amazon office to verify sellers’ identities and government-issued documentation.” Amazon said it also “verifies new and existing sellers’ addresses by sending information including a unique code to the seller’s address.”

Most new attempts to register as a seller were apparently fraudulent, as Amazon said that “only 6 percent of attempted new seller account registrations passed our robust verification processes and listed products.” Overall, Amazon “stopped over 6 million attempts to create a selling account before they were able to publish a single listing for sale” in 2020, more than double “the 2.5 million attempts we stopped in 2019,” Amazon said.

The verification process isn’t enough on its own to stop all new fraudulent sellers, so Amazon said it performs “continuous monitoring” of sellers to identify new risks. “If we identify a bad actor, we immediately close their account, withhold funds disbursement, and determine if this new information brings other related accounts into suspicion. We also determine if the case warrants civil or criminal prosecution and report the bad actor to law enforcement,” Amazon said.

Amazon monitors product detail changes for fraud

One problem we wrote about a few months ago involves “bait-and-switch reviews” in which sellers trick Amazon into displaying reviews for unrelated products to get to the top of Amazon’s search results. In one case, a $23 drone with 6,400 reviews achieved a five-star average rating only because it had thousands of reviews for honey. At some point, the product listing had changed from a food item to a tech product, but the reviews for the food product remained. After a purging of the old reviews, that same product page now lists just 348 ratings at a 3.6-star average.

Amazon is trying to prevent recurrences of this problem, saying in its new report that it scans “more than 5 billion attempted changes to product detail pages daily for signs of potential abuse.”
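The kind of abuse that scan targets can be sketched with a toy heuristic. The check below is purely illustrative — the field names and threshold are invented for this example, not Amazon’s actual signals — and it simply flags an edit that switches a listing’s category while a large review history stays attached, the pattern behind the hijacked honey-to-drone page.

```python
# Hypothetical sketch: flag product-detail edits that change a listing's
# category while its review history stays attached -- the pattern behind
# "bait-and-switch reviews." Field names and the threshold are illustrative,
# not Amazon's actual system.

def is_suspicious_edit(before: dict, after: dict, min_reviews: int = 100) -> bool:
    """Return True if an edit looks like review hijacking."""
    category_changed = before["category"] != after["category"]
    reviews_retained = after["review_count"] >= min_reviews
    # A drone listing inheriting thousands of honey reviews would trip this.
    return category_changed and reviews_retained

listing_before = {"category": "Grocery", "review_count": 6400}
listing_after = {"category": "Electronics", "review_count": 6400}

print(is_suspicious_edit(listing_before, listing_after))  # True
```

A production system would of course weigh many more signals (seller history, edit velocity, image changes), but the core idea — comparing the listing before and after an attempted change — is the same.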

Amazon also provides self-service tools to companies to help them block counterfeits of their products. Amazon’s report said that 18,000 brands have enrolled in “Project Zero,” which “provides brands with unprecedented power by giving them the ability to directly remove listings from our store.” The program also has an optional product serialization feature that lets sellers put unique codes on their products or packaging.

The self-service tool only accounts for a tiny percentage of blocked listings. “For every 1 listing removed by a brand through our self-service counterfeit removal tool, our automated protections removed more than 600 listings through scaled technology and machine learning that proactively addresses potential counterfeits and stops those listings from appearing in our store,” Amazon said.


Hackers who shut down pipeline: We don’t want to cause “problems for society”

Problems with Colonial Pipeline’s distribution system tend to lead to gasoline runs and price increases across the US Southeast and Eastern seaboard. In this September 2016 photo, a man prepared to refuel his vehicle after a Colonial leak in Alabama.

On Friday, Colonial Pipeline took many of its systems offline in the wake of a ransomware attack. With systems offline to contain the threat, the company’s pipeline system is inoperative. The system delivers approximately 45 percent of the East Coast’s petroleum products, including gasoline, diesel fuel, and jet fuel.

Colonial Pipeline issued a statement Sunday saying that the US Department of Energy is leading the US federal government response to the attack. “[L]eading, third-party cybersecurity experts” engaged by Colonial Pipeline itself are also on the case. The company’s four main pipelines are still down, but it has begun restoring service to smaller lateral lines between terminals and delivery points as it determines how to safely restart its systems and restore full functionality.

Colonial Pipeline has not publicly said what was demanded of it or how the demand was made. Meanwhile, the hackers have issued a statement saying that they’re just in it for the money.

Regional emergency declaration

In response to the attack on Colonial Pipeline, the Biden administration issued Regional Emergency Declaration 2021-002 on Sunday. The declaration provides a temporary exemption from Parts 390 through 399 of the Federal Motor Carrier Safety Regulations, allowing alternate transportation of petroleum products via tanker truck to relieve shortages related to the attack.

The emergency declaration became effective immediately upon issuance Sunday and remains in effect until June 8 or until the emergency ends, whichever is sooner. Although the move will ease shortages somewhat, oil market analyst Gaurav Sharma told the BBC the exemption wouldn’t be anywhere near enough to replace the pipeline’s missing capacity. “Unless they sort it out by Tuesday, they’re in big trouble,” said Sharma, adding that “the first areas to hit would be Atlanta and Tennessee, then the domino effect goes up to New York.”

Russian gang DarkSide believed responsible for attack

Unnamed US government and private security sources engaged by Colonial have told CNN, The Washington Post, and Bloomberg that the Russian criminal gang DarkSide is likely responsible for the attack. DarkSide typically chooses targets in non-Russian-speaking countries but describes itself as “apolitical” on its dark web site.

Infosec analyst Dmitry Smilyanets tweeted a screenshot of a statement the group made this morning, apparently concerning the Colonial Pipeline attack:

NBC News reports that Russian cybercriminals frequently freelance for the Kremlin—but indications point to a cash grab made by the criminals themselves this time rather than a state-sponsored attack.

Dmitri Alperovitch, a co-founder of infosec company CrowdStrike, claims that direct Russian state involvement hardly matters at this point. “Whether they work for the state or not is increasingly irrelevant, given Russia’s obvious policy of harboring and tolerating cybercrime,” he said.

DarkSide “operates like a business”

This sample threat was posted to DarkSide’s dark web site in 2020, detailing attacks made on a threat management company.

London-based security firm Digital Shadows said in September that DarkSide operates like a business and described its business model as “RaaC”—meaning Ransomware-as-a-Corporation.

In terms of its actual attack methods, DarkSide doesn’t appear to be very different from smaller criminal operators. According to Digital Shadows, the group stands out due to its careful selection of targets, preparation of custom ransomware executables for each target, and quasi-corporate communication throughout the attacks.

DarkSide claims to avoid targets in medical, education, nonprofit, or governmental sectors—and claims that it only attacks “companies that can pay the requested amount” after “carefully analyz[ing] accountancy” and determining a ransom amount based on a company’s net income. Digital Shadows believes these claims largely translate to “we looked you up on ZoomInfo first.”

It seems quite possible that the group didn’t realize how much heat it would bring onto itself with the Colonial Pipeline attack. Although Colonial is not a government entity, its operations are crucial enough to national security to have brought an immediate Department of Energy response—which the group certainly noticed and appears to have responded to via this morning’s statement that it would “check each company that our partners want to encrypt” to avoid “social consequences” in the future.


Apple brass discussed disclosing 128-million iPhone hack, then decided not to

In September 2015, Apple managers had a dilemma on their hands: should they, or should they not, notify 128 million iPhone users of what remains the worst mass iOS compromise on record? Ultimately, all evidence shows, they chose to keep quiet.

The mass hack first came to light when researchers uncovered 40 malicious App Store apps, a number that mushroomed to 4,000 as more researchers poked around. The apps contained code that made iPhones and iPads part of a botnet that stole potentially sensitive user information.

128 million infected.

An email entered into court this week in Epic Games’ lawsuit against Apple shows that, on the afternoon of September 21, 2015, Apple managers had uncovered 2,500 malicious apps that had been downloaded a total of 203 million times by 128 million users, 18 million of whom were in the US.

“Joz, Tom and Christine—due to the large number of customers potentially affected, do we want to send an email to all of them?” App Store VP Matthew Fischer wrote, referring to Apple Senior Vice President of Worldwide Marketing Greg Joswiak and Apple PR people Tom Neumayr and Christine Monaghan. The email continued:

If yes, Dale Bagwell from our Customer Experience team will be on point to manage this on our side. Note that this will pose some challenges in terms of language localizations of the email, since the downloads of these apps took place in a wide variety of App Store storefronts around the world (e.g. we wouldn’t want to send an English-language email to a customer who downloaded one or more of these apps from the Brazil App Store, where Brazilian Portuguese would be the more appropriate language).

The dog ate our disclosure

About 10 hours later, Bagwell discussed the logistics of notifying all 128 million affected users, localizing notifications to each user’s language, and “accurately includ[ing] the names of the apps for each customer.”

Alas, all appearances are that Apple never followed through on its plans. An Apple representative could point to no evidence that such an email was ever sent. Statements the representative sent on background—meaning I’m not permitted to quote them—noted that Apple instead published only this now-deleted post.

The post provides very general information about the malicious app campaign and eventually lists only the top 25 most downloaded apps. “If users have one of these apps, they should update the affected app which will fix the issue on the user’s device,” the post stated. “If the app is available on [the] App Store, it has been updated, if it isn’t available it should be updated very soon.”

Ghost of Xcode

The infections were the result of legitimate developers writing apps using a counterfeit copy of Xcode, Apple’s iOS and OS X app development tool. The repackaged tool dubbed XcodeGhost surreptitiously inserted malicious code alongside normal app functions.

From there, apps caused iPhones to report to a command and control server and provide a variety of device information, including the name of the infected app, the app-bundle identifier, network information, the device’s “identifierForVendor” details, and the device name, type, and unique identifier.

XcodeGhost billed itself as faster to download in China, compared with Xcode available from Apple. For developers to have run the counterfeit version, they would have had to click through a warning delivered by Gatekeeper, the macOS security feature that requires apps to be digitally signed by a known developer.

The lack of follow-through is disappointing. Apple has long prioritized the security of the devices it sells. It has also made privacy a centerpiece of its products. Directly notifying those affected by this lapse would have been the right thing to do. We already knew that Google routinely doesn’t notify users when they download malicious Android apps or Chrome extensions. Now we know that Apple has done the same thing.

Stopping Dr. Jekyll

The email wasn’t the only one that showed Apple brass hashing out security problems. A separate one sent to Apple Fellow Phil Schiller and others in 2013 forwarded a copy of the Ars article headlined “Seemingly benign ‘Jekyll’ app passes Apple review, then becomes ‘evil’.”

The article discussed research from computer scientists who found a way to sneak malicious programs into the App Store without being detected by the mandatory review process that’s supposed to automatically flag such apps. Schiller and the other recipients of the email wanted to figure out how to shore up Apple’s protections in light of the researchers’ discovery that the static analyzer Apple used wasn’t effective against the newly discovered method.

“This static analyzer looks at API names rather than true APIs being called, so there’s often the issue of false positives,” Apple senior VP of Internet software and services Eddy Cue wrote. “The Static Analyzer enables us to catch direct accessing of Private APIs, but it completely misses apps using indirect methods of accessing these Private APIs. This is what the authors used in their Jekyll apps.”
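The gap Cue describes is easy to see with a toy example. The affected apps were Objective-C binaries, so the sketch below is only a loose Python analogy with invented names: a scanner that looks for a forbidden API name in source text catches direct calls, but misses a call whose name is assembled at runtime.

```python
# Illustrative analogy only: a naive scanner that searches source text for a
# forbidden API name catches direct calls but misses dynamically constructed
# ones -- the kind of gap Cue describes. All names here are made up.

FORBIDDEN = {"private_api"}

def naive_scan(source: str) -> bool:
    """Flag source only if a forbidden name appears literally in the text."""
    return any(name in source for name in FORBIDDEN)

direct = "obj.private_api()"                     # name appears literally
indirect = "getattr(obj, 'priv' + 'ate_api')()"  # name assembled at runtime

print(naive_scan(direct))    # True
print(naive_scan(indirect))  # False -- slips past the scanner
```

This is why analyzing “API names rather than true APIs being called” produces both false positives and false negatives: the scanner sees text, not the call that actually happens when the app runs.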

The email went on to discuss limitations of two other Apple defenses, one known as Privacy Proxy and the other Backdoor Switch.

“We need some help in convincing other teams to implement this functionality for us,” Cue wrote. “Until then, it is more brute force, and somewhat ineffective.”

Lawsuits involving large companies often provide never-before-seen views into the inner workings of those companies and their executives. Often, as is the case here, those views are at odds with the companies’ talking points. The trial resumes next week.
