Europe agrees platform rules to tackle unfair business practices

The European Union’s political institutions have reached agreement over new rules designed to boost transparency around online platform businesses and curb unfair practices to support traders and other businesses that rely on digital intermediaries for discovery and sales.

The European Commission proposed a regulation for fairness and transparency in online platform trading last April. And late yesterday the European Parliament, Council of the EU and Commission reached a political deal on regulating the business environment of platforms, announcing the accord in a press release today.

The political agreement paves the way for adoption and publication of the regulation, likely later this year. The rules will apply 12 months after that point.

Online platform intermediaries such as ecommerce marketplaces and search engines are covered by the new rules if they provide services to businesses established in the EU that offer goods or services to consumers located in the EU.

The Commission estimates there are some 7,000 such platforms and marketplaces which will be covered by the regulation, noting this includes “world giants as well as very small start-ups”.

Under the new rules, sudden and unexpected account suspensions will be banned — with the Commission saying platforms will have to provide “clear reasons” for any termination and also possibilities for appeal.

Terms and conditions must also be “easily available and provided in plain and intelligible language”.

There must also be advance notice of changes — of at least 15 days, with longer notice periods applying for more complex changes.

For search engines the focus is on ranking transparency. And on that front dominant search engine Google has attracted more than its fair share of criticism in Europe from a range of rivals (not all of whom are European).

In 2017, the search giant was also slapped with a $2.7BN antitrust fine related to its price comparison service, Google Shopping. The EC found Google had systematically given prominent placement to its own search comparison service while also demoting rival services in search results. (Google rejects the findings and is appealing.)

Given the history of criticism of Google’s platform business practices, and the multi-year regulatory tug of war over anti-competitive impacts, the new transparency provisions look intended to make it harder for a dominant search player to use its market power against rivals.

Changing the online marketplace

The importance of legislating for platform fairness was flagged by the Commission’s antitrust chief, Margrethe Vestager, last summer — when she handed Google another very large fine ($5BN) for anti-competitive behavior related to its mobile platform Android.

Vestager said then she wasn’t sure breaking Google up would be an effective competition fix, preferring to push for remedies to support “more players to have a real go”, as her Android decision attempts to do. But she also stressed the importance of “legislation that will ensure that you have transparency and fairness in the business to platform relationship”.

If businesses have legal means to find out why, for example, their traffic has stopped and what they can do to get it back that will “change the marketplace, and it will change the way we are protected as consumers but also as businesses”, she argued.

Just such a change is now in sight thanks to EU political accord on the issue.

The regulation represents the first such rules for online platforms in Europe and, commissioners contend, anywhere in the world.

“Our target is to outlaw some of the most unfair practices and create a benchmark for transparency, at the same time safeguarding the great advantages of online platforms both for consumers and for businesses,” said Andrus Ansip, VP for the EU’s Digital Single Market initiative, in a statement.

Elżbieta Bieńkowska, commissioner for internal market, industry, entrepreneurship, and SMEs, added that the rules are “especially designed with the millions of SMEs in mind”.

“Many of them do not have the bargaining muscle to enter into a dispute with a big platform, but with these new rules they have a new safety net and will no longer worry about being randomly kicked off a platform, or intransparent ranking in search results,” she said in another supporting statement.

In a factsheet about the new rules, the Commission specifies they cover third-party ecommerce market places (e.g. Amazon Marketplace, eBay, Fnac Marketplace, etc.); app stores (e.g. Google Play, Apple App Store, Microsoft Store etc.); social media for business (e.g. Facebook pages, Instagram used by makers/artists etc.); and price comparison tools (e.g. Skyscanner, Google Shopping etc.).

The regulation does not target every online platform. For example, it does not cover online advertising (or b2b ad exchanges), payment services, SEO services or services that do not intermediate direct transactions between businesses and consumers.

The Commission also notes that online retailers which sell their own-brand products and/or don’t rely on third-party sellers on their own platform, such as brand retailers or supermarkets, are excluded from the regulation.

Where transparency is concerned, the rules require that regulated marketplaces and search engines disclose the main parameters they use to rank goods and services on their site “to help sellers understand how to optimise their presence” — with the Commission saying the aim is to support sellers without allowing gaming of the ranking system.

Some platform business practices will also require mandatory disclosure — such as for platforms that not only provide a marketplace for sellers but sell on their platform themselves, as does Amazon for example.

The ecommerce giant’s use of merchant data remains under scrutiny in the EU. Vestager revealed a preliminary antitrust probe of Amazon last fall, when she said her department was gathering information to “try to get a full picture”. Her concern, she said, is that platforms playing such a dual role could gain an unfair advantage as a consequence of their access to merchants’ data.

And, again, the incoming transparency rules look intended to shrink that risk — requiring what the Commission couches as exhaustive disclosure of “any advantage” a platform may give to their own products over others.

“They must also disclose what data they collect, and how they use it — and in particular how such data is shared with other business partners they have,” it continues, noting also that: “Where personal data is concerned, the rules of the GDPR [General Data Protection Regulation] apply.”

(GDPR of course places further transparency requirements on platforms by, for example, empowering individuals to request any personal data held on them, as well as the reasons why their information is being processed.)

The platform regulation also includes new avenues for dispute resolution by requiring platforms to set up an internal complaint-handling system to assist business users.

“Only the smallest platforms in terms of head count or turnover will be exempt from this obligation,” the Commission notes. (The exemption limit is set at fewer than 50 staff and less than €10M revenue.)

It also says: “Platforms will have to provide businesses with more options to resolve a potential problem through mediators. This will help resolve more issues out of court, saving businesses time and money.”

But, at the same time, the new rules allow business associations to take platforms to court to stop any non-compliance — mirroring a provision in the GDPR which also allows for collective enforcement and redress of individual privacy rights (where Member States adopt it).

“This will help overcome fear of retaliation, and lower the cost of court cases for individual businesses, when the new rules are not followed,” the Commission argues.

“In addition, Member States can appoint public authorities with enforcement powers, if they wish, and businesses can turn to those authorities.”

One component of the regulation that appears to be left up to EU Member States is penalties for non-compliance, with no clear regime of fines set out (as there is in the GDPR). So the platform regulation may turn out to have rather more bark than bite, at least initially.

“Member States shall need to take measures that are sufficiently dissuasive to ensure that the online intermediation platforms and search engines comply with the requirements in the Regulation,” the Commission writes in a section of its factsheet dealing with how to make sure platforms respect the new rules.

It also points again to the provision allowing business associations or organisations to take action in national courts on behalf of members — saying this offers a legal route to “stop or prohibit non-compliance with one or more of the requirements of the Regulation”. So, er, expect lawsuits.

The Commission says the rules will be subject to review within 18 months after they come into force — in a bid to ensure the regulation keeps pace with fast-paced tech developments.

A dedicated Online Platform Observatory has been established in the EU for the purpose of “monitoring the evolution of the market and the effective implementation of the rules”, it adds.

The power of AI compels you to believe this fake image of Pope in a puffy coat

Image: An AI-generated photo of Pope Francis wearing a puffy white coat that went viral on social media.

Over the weekend, an AI-generated image of Pope Francis wearing a puffy white coat went viral on Twitter, and apparently many people believed it was a real image. Since then, the puffy pontiff has inspired commentary on the deceptive nature of AI-generated images, which are now nearly photorealistic.

The pope image, created using Midjourney v5 (an AI image synthesis model), first appeared in a tweet by a user named Leon (@skyferrori) on Saturday and quickly began circulating as part of other meme tweets featuring similar images as well, including one that humorously speculates about a pope “lifestyle brand.”

Not long after, Twitter attached a reader-added context warning to the tweet that reads, “This is an AI-generated image of Pope Francis. It is not a genuine photo.”

As noted in our piece on last week’s AI-generated Donald Trump arrest photos, Twitter guidelines state that users “may not deceptively share synthetic or manipulated media that are likely to cause harm.” Although in this case, the line between harm and parody might be a fuzzy one.

How do we know the image is fake? Aside from a Reddit post containing alternative images of the Pope from the person who likely made it, The Verge breaks down the evidence fairly well in a piece analyzing the impact of the false image. For example, if you zoom in on details, you’ll see telltale signs of image synthesis in warped details like the pope’s crucifix necklace, the crooked shadow of his glasses, and whatever he is carrying in his hand (a cup?).

But still, upon a quick glance, the false photo (“fauxto”?) looks fairly realistic. And as The Verge notes, a stylish image of Pope Francis plays into our beliefs about the papacy, which often involves wild non-fake imagery—although Pope Francis is known for his “humble” outfits.

A Midjourney journey

The image service used to create the fake photo, Midjourney, debuted last year. Along with DALL-E and Stable Diffusion, it’s one of three major image synthesis models that have become popular online. All three allow users to generate novel images using only text descriptions called “prompts.”

Image: Our experiments with “Pope Francis in a 1990s white puffer jacket,” created using Midjourney v5. (Credit: Midjourney)

In this case, the prompt used to create the puffy pope photo might have been as simple as “Pope Francis in a puffy white coat” because Midjourney has made huge leaps in photorealism recently, rendering complex scenes full of details from relatively simple prompts.

What this almost effortless capability to fake photos means for the future of media is still uncertain, but as we’ve speculated before, due to image synthesis, we may never be able to believe what we see online again.

Hobbyist builds ChatGPT client for MS-DOS

Image: A photo of an IBM PC 5155 portable computer running a ChatGPT client written by Yeo Kheng Meng.

On Sunday, Singapore-based retrocomputing enthusiast Yeo Kheng Meng released a ChatGPT client for MS-DOS that can run on a 4.77 MHz IBM PC from 1981, providing a unique way to converse with the popular OpenAI language model.

Vintage computer development projects come naturally to Yeo, who created a Slack client for Windows 3.1 back in 2019. “I thought to try something different this time and develop for an even older platform as a challenge,” he writes on his blog. In this case, he turned his attention to MS-DOS, a text-only operating system first released in 1981, and ChatGPT, an AI-powered large language model (LLM) released by OpenAI in November.

As a conversational AI model, ChatGPT draws on knowledge scraped from the Internet to answer questions and generate text. Thanks to an API that launched this month, anyone with the programming chops can interface ChatGPT with their own custom application.

Thanks to his new app, which can run on MS-DOS, Yeo can use a vintage IBM PC-compatible computer to chat with ChatGPT over the Internet. It offers the same kind of back-and-forth conversation as the traditional ChatGPT web interface, albeit as a text-only, full-screen application running on the antique machine.

Development challenges

Image: A photo of an IBM PC 5155 computer running a ChatGPT client written by Yeo Kheng Meng.

MS-DOS is a particularly challenging platform for a ChatGPT client because it lacks native networking abilities. In addition, Yeo targeted a computer with very limited processing power: a 1984 IBM 5155 Portable PC with an Intel 8088 CPU running at 4.77 MHz, 640KB of conventional memory, CGA ISA graphics, and MS-DOS 6.22.

To create the client, Yeo used Open Watcom C/C++, a modern compiler running on Windows 11 that can target 16-bit DOS platforms. To streamline development, he tested in a VirtualBox virtual machine running DOS 6.22, then transferred the compiled binary to the target IBM DOS PC.

To handle networking on the IBM PC, Yeo needed to weave his way through several layers. First, Yeo utilized a “Packet Driver API” standard invented in 1983. He integrated the open source MTCP library by Michael B. Brutman into the application to communicate with the Packet Driver, enabling networking capabilities for the client.

For the ChatGPT API, Yeo used OpenAI’s Chat Completion API, constructing the POST request (and parsing the JSON-formatted response) manually in C.
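
Yeo’s actual request-building and parsing code lives in the doschgpt repository; purely as an illustration of what assembling such a request by hand involves, here is a minimal, hypothetical sketch in C. It assumes the standard api.openai.com/v1/chat/completions endpoint, the gpt-3.5-turbo model, and a placeholder API key, none of which are taken from Yeo’s code.

/* Hypothetical sketch: hand-building a Chat Completions POST request in C.
   The endpoint, model name, and API key below are illustrative assumptions,
   not values taken from Yeo's doschgpt code. */
#include <stdio.h>
#include <string.h>

#define API_KEY  "sk-REPLACE_ME"           /* placeholder API key */
#define API_HOST "api.openai.com"
#define API_PATH "/v1/chat/completions"

int main(void)
{
    char body[256];
    char request[768];

    /* JSON payload: a single user message for the gpt-3.5-turbo model */
    sprintf(body,
            "{\"model\":\"gpt-3.5-turbo\","
            "\"messages\":[{\"role\":\"user\",\"content\":\"Hello from MS-DOS\"}]}");

    /* HTTP POST with the headers the Chat Completions API expects */
    sprintf(request,
            "POST %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Authorization: Bearer %s\r\n"
            "Content-Type: application/json\r\n"
            "Content-Length: %u\r\n"
            "Connection: close\r\n"
            "\r\n"
            "%s",
            API_PATH, API_HOST, API_KEY, (unsigned)strlen(body), body);

    /* A DOS client would hand this buffer to its TCP code (and, in Yeo's
       setup, to the HTTP-to-HTTPS proxy described below); here we just print it. */
    printf("%s\n", request);
    return 0;
}

Parsing the JSON-formatted reply for the assistant’s message text is the mirror image of this step, which Yeo also does by hand in C.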

However, Yeo hit a major snag: the ChatGPT APIs require encrypted HTTPS connections. Since there are no native HTTPS libraries for MS-DOS, Yeo had to create an HTTP-to-HTTPS proxy that can run on a modern computer and translate the requests and responses between the MS-DOS client and ChatGPT’s secure API, acting as a transparent middleman in the communication process.

Yeo says that reading and writing input to the console presented another challenge due to the single-threaded nature of DOS applications. Using the MTCP page and online samples as a reference, he devised a method to check for and receive keypresses without pausing the program.
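
Yeo’s exact input-handling code follows those MTCP samples, but as a rough idea of the classic DOS technique involved, a hypothetical sketch using kbhit() and getch() from conio.h (illustrative only, not taken from doschgpt) could look like this:

/* Hypothetical sketch of non-blocking console input on DOS, not Yeo's code:
   kbhit() reports whether a keypress is waiting without stopping the program,
   so a single-threaded client can keep servicing the network between keys. */
#include <conio.h>
#include <stdio.h>

int main(void)
{
    char line[128];
    int len = 0;

    printf("Type a prompt, press Enter to send, Esc to quit\n");
    for (;;) {
        if (kbhit()) {                      /* a key is waiting: read it */
            int ch = getch();
            if (ch == 27) {                 /* Esc: quit */
                break;
            } else if (ch == '\r') {        /* Enter: "send" the buffered line */
                line[len] = '\0';
                printf("\n[would send: %s]\n", line);
                len = 0;
            } else if (len < (int)sizeof(line) - 1) {
                line[len++] = (char)ch;     /* buffer and echo the character */
                putch(ch);
            }
        }
        /* No key pending: a networked client would poll its TCP stack here
           so incoming data keeps flowing while the user types. */
    }
    return 0;
}

The point of polling rather than blocking is that the same single-threaded loop can keep driving the network stack between keystrokes.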

In the end, the client works better than Yeo expected, and he looks forward to more retro challenges in the future: “After experiencing this, I will definitely be writing more retro-software in future,” he writes in a blog post that describes his development process in more detail.

Yeo has released his code (called “doschgpt”) on GitHub if others want to run it themselves—or perhaps improve or extend the code in the future. With a little creativity, the latest tech in AI language models need not be limited to cutting-edge machines.

Twitter source code was leaked on GitHub shortly after Musk’s layoff spree

Portions of Twitter’s source code recently appeared on GitHub, and Twitter is trying to force GitHub to identify the user or users who posted the code.

GitHub disabled the repository on Friday shortly after Twitter filed a DMCA (Digital Millennium Copyright Act) takedown notice but apparently hasn’t provided the information Twitter is seeking. Twitter’s DMCA takedown notice asked GitHub to provide the code submitter’s “upload/download/access history,” contact information, IP addresses, and any session information or “associated logs related to this repo or any forks.”

The GitHub user who posted the Twitter source code has the username “FreeSpeechEnthusiast,” possibly a reference to Twitter owner Elon Musk casting himself as a protector of free speech.

“It was unclear how long the leaked code had been online, but it appeared to have been public for at least several months,” a New York Times article said. Despite that, the NYT article said Twitter “executives were only recently made aware of the source code leak.”

GitHub user FreeSpeechEnthusiast’s profile indicates the user joined GitHub on January 3, 2023, and made the account’s only code submission the same day. Twitter’s DMCA notice to GitHub described the code as “proprietary source code for Twitter’s platform and internal tools.”

Suspect list could include thousands of ex-employees

The leaker may have been one of the roughly 5,500 employees who left Twitter via layoff, firing, or resignation after Musk bought the company. Twitter also reportedly laid off about 5,000 contractors shortly after the Musk acquisition.

“Twitter began an investigation into the leak and executives handling the matter have surmised that whoever was responsible left the San Francisco-based company last year, two people briefed on the internal investigation said,” the NYT wrote.

Musk said on March 17 that Twitter will make “all code used to recommend tweets” open source by March 31, but the leaked code may be much more sensitive. The NYT said its sources indicate that Twitter executives are concerned “that the code includes security vulnerabilities that could give hackers or other motivated parties the means to extract user data or take down the site.”

Twitter sent the takedown notice on Friday and asked a federal court to issue a subpoena later the same day. “The DMCA Subpoena is directed to service provider GitHub,” Twitter’s request for a subpoena said. “GitHub operates a website to which the infringing party or parties (identified by their GitHub username as FreeSpeechEnthusiast) posted various excerpts of Twitter source code, which posting infringes copyrights held by Twitter in those materials.”

Twitter seeks “all identifying information”

Twitter’s proposed subpoena seeks “all identifying information, including the name(s), address(es), telephone number(s), email address(es), social media profile data, and IP address(es), for the user(s) associated with the following GitHub username: FreeSpeechEnthusiast.” It also asks for “all identifying information provided when this account was established, as well as all identifying information provided subsequently for billing or administrative purposes.”

The subpoena request further seeks all identifying information for any “users who posted, uploaded, downloaded or modified the data” at the repository where the Twitter source code was posted.

When contacted by Ars, GitHub did not comment on Twitter’s request for the user’s identifying information or the attempt to obtain a subpoena. “GitHub does not generally comment on decisions to remove content. However, in the interest of transparency, we share every DMCA takedown request publicly,” a GitHub spokesperson said. The Twitter DMCA takedown notice has been posted publicly by GitHub.

GitHub is owned by Microsoft. Another Twitter court filing contains the email thread between Twitter and GitHub that led to the takedown on Friday. It appears that GitHub disabled the repository less than an hour and a half after Twitter filed the takedown notice.
