PC fan port of early Sonic games lets you zoom the camera way, way back

A group of coders has decompiled the source code for Sonic the Hedgehog and its 1992 sequel from their well-regarded 2013 smartphone ports. That means these heavily enhanced versions of the early ’90s Genesis games—developed by Christian Whitehead using the same revamped Retro/Star Engine that powers Sonic Mania—can now be easily recompiled for play on new platforms including the PlayStation Vita, the Nintendo Switch, and Windows/Mac computers.

That’s an interesting-enough hacking/coding achievement on its own. But with a little tinkering, the PC versions also let players scale the game window to any arbitrary resolution, expanding the visible playfield without scaling up the games’ core pixel graphics. As you can see in the pictures and videos included in this article, this tweak effectively zooms out the standard in-game camera to show huge chunks of a stage at once, giving players an exciting new perspective on these classic titles.

But how?

At 4096×2160, you can see a lot more of Sonic 2 at once. Be sure to expand to full screen for maximum impact.

Filling your PC screen with a playable Sonic map isn’t exactly as simple as dragging the corner of the gameplay window. First, you have to take a legally obtained copy of one of the 2013 Sonic games (which are still available on Google Play and the iOS App Store) and extract the “RSDK” file to your computer (this handy video tutorial can be of assistance there). From there, you can run the precompiled Windows release and edit the settings file to extend the playfield horizontally with relative ease (you can also edit the pixel scale if you want to effectively zoom the game’s camera back in on a large monitor).
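
For reference, the kind of edit involved looks something like the sketch below. The section and key names here are illustrative assumptions rather than a verbatim copy; check the settings file that ships with the precompiled release you're actually using.

```ini
; Hypothetical excerpt from the decompilation's settings file.
; Section and key names are assumptions for illustration only.
[Window]
FullScreen=true
; Playfield width in game pixels: raising this widens the visible
; slice of the stage without scaling up the pixel art.
ScreenWidth=1280
; Integer scale factor for the pixel art: raising it effectively
; zooms the game's camera back in on a large monitor.
WindowScale=2
```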

Unfortunately, the game’s vertical height remains hardcoded at 240 pixels in this build, which means the game looks like a long, thin strip when extended across the width of a modern PC monitor. To extend the playfield vertically, you have to dive into the decompiled source code, change “SCREEN_YSIZE” in retroengine.hpp, and then recompile a fresh executable (there are some tricky dependencies involved in getting this to work; much thanks to @CodeNameGamma for her assistance in my attempts).
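
The change itself is tiny once you can build the project; in the decompiled source it amounts to something like this (a sketch, not a verbatim diff from the repository):

```cpp
// retroengine.hpp (sketch): the vertical resolution is a compile-time
// constant, so resizing the window alone can't change it.

// The stock definition hardcodes the Genesis-era playfield height:
//     #define SCREEN_YSIZE (240)

// Raising the constant and rebuilding extends the playfield vertically:
#define SCREEN_YSIZE (960)  // e.g., four times the stock height

// The horizontal width, by contrast, is read from the settings file at
// runtime, which is why it can be edited without recompiling.
```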

The thousand-foot view

Once you get things working, however, the effect of this “zoomed out” view is immediately striking. The standard 32×48 pixel Sonic sprite becomes a tiny, Where’s Waldo-esque speck on a 2560×1440 monitor (or even tinier if you have a 4K or widescreen display). The new viewpoint lets players see well past the cramped, 320×224 screen area they may be used to on the Genesis, allowing them to take in the scale and design of these massive levels all at once. Hidden paths and secrets that once flew by in a blur become immediately apparent when you can take the ultrahigh-level view of a stage at a glance.

These 2013 mobile ports were originally designed to run at “full screen” resolution on a variety of different smartphones, so the engine handles all this rescaling pretty smoothly on its own. Enemies, moving platforms, and animated background elements all generally work, even if Sonic is thousands of pixels away on the opposite corner of the screen. The in-game physics still work as expected and everything is rendered with pixel-perfect authenticity at 60 frames per second, too (assuming your machine can handle all those pixels at these expanded resolutions).

Still, there are some odd gameplay and visual artifacts when you try to scale a game originally designed for ’90s standard-definition TVs to modern computer resolutions. This is most apparent at the end of many levels, where Sonic can get stuck on a newly obstructive invisible wall while the game sits in an infinite loop waiting for him to run off screen. On flat levels, the background tiles and even the level architecture itself can sometimes repeat in a vertical pattern, too. And the AI for Dr. Robotnik’s boss battles tends to freak out a little thanks to the new, much larger playfield.

These issues may get ironed out as hackers continue to tinker with the source code and build new versions of these freshly decompiled games. In the meantime, though, we’ll never look at classic Sonic the same way again.

Secrets of the Whales explores language, social structure of giants of the deep

National Geographic photographer Brian Skerry spent three years documenting the cultural lives of whales. His journey is the subject of a new four-part documentary series on Disney+, Secrets of the Whales.

Intrepid film crews tracked various species of whales all over the world, capturing their unique hunting strategies, communication skills, and social structures for Secrets of the Whales, a new four-part documentary series from National Geographic, now streaming on Disney+.

The project started with National Geographic Explorer and photographer Brian Skerry, who spent three years traveling around the globe documenting the culture of five different species of whale: orcas, humpbacks (aka “the singing sensation of the ocean”), belugas, narwhals, and sperm whales. The Massachusetts-born Skerry recalls visiting the beaches of New England as a child and being fascinated by nature documentaries about the ocean. “There was something especially awe-inspiring about whales,” he told Ars. “There are so many secrets. If I spent the rest of my life just [filming] whales, I would be very happy.”

Skerry pitched a one-hour documentary to National Geographic about his project, which turned into four hours when producer, writer, and director Brian Armstrong (Red Rock Films) signed on, along with Oscar-winning director James Cameron as executive producer. “It started off as a photographer profile [of Skerry], but the scope became so big,” Armstrong told Ars. “[We realized] it’s about the whales and their culture—a big breakthrough topic. It’s subtle, but you’ll notice when we do introduce human characters, you’re usually looking out from the whale’s point of view as we get into their world.”

Narrated by actor Sigourney Weaver, the final documentary series uses some of Skerry’s original footage, as well as additional material from subsequent NatGeo shoots. Among the many notable moments, the crew captured a baby sperm whale suckling from its mother (a first); humpbacks on the coast of Australia breaching to communicate with each other; a baby humpback learning how to blow bubbles to create a “bubble net” to corral tasty fish; and the first cross-species adoption ever recorded, as a pod of beluga whales accepts a lone narwhal into its ranks.

Ars sat down with Cameron, Skerry, and Armstrong to learn more.

Ars Technica: What is it that drew you to this project? 

James Cameron: First of all, what’s not to love about whales? That’s a no-brainer. But really, it was the challenge and the fascination of maybe finding out something new that cetacean specialists didn’t know. Because if you go out there with enough people, put enough cameras out, and have enough observation time, you’re going to see behaviors that have never been seen and/or recorded before.

I think the show acts as an intermediary between a body of knowledge that’s already known and a public that might not really understand that whales have culture, that they have language, they have music, they have complex social bonds, they have complex social behaviors. They have these highly active, very high-processing brains, the largest brains on the planet, much larger than ours.

We’re only just beginning to understand how complex their culture is, because they’re not technological. We’ve got our monkey hands, and we build things, and we love our machines. Whales don’t do it that way. They interact with the same world that we do in a completely non-technological way. Because we don’t speak their language, it’s only slowly revealed how they’re thinking and how they’re processing. To me, that was a fascinating opportunity, so I didn’t hesitate. When National Geographic started to develop this project, I said, “Hey, guys, I’d love to be involved.”

Ars Technica: There’s always a certain degree of serendipity at play when it comes to documenting nature in the wild. How do you prepare to make sure you’re ready when those rare critical moments occur? 

Brian Skerry: I spent years doing research and talking to scientists, figuring out what the story could be. Where can we go? What time of year? What’s the likelihood of seeing these things? You try to narrow down those odds in your favor. At the end of the day, you have a shot list of things that you hope you can get to tell the story—the bare minimum. But if serendipity works in your favor, you get a stingray dropped next to you in New Zealand, or you get a sperm whale mom nursing its calf and trusting me to get close. So serendipity is everything, but that usually only happens for me if I am able to spend a lot of time. Three years and 24 locations sounds like a lot, but in the whale photography biz, it’s not a lot of time, really.

Brian Armstrong: For most of our sequences, we get it all in one day. But it might take us a month to get that one day. It was a bit like roulette. We went to places where we thought we would most likely be able to at least get some good images of whales and hope that we’d get lucky. We had this young humpback that was learning how to [make a bubble net]; it was trying and not succeeding. We decided to just stick with it and see what would happen over a couple of days. When that little calf finally made its perfectly shaped bubble net, we were just overjoyed with goosebumps. Those are the golden moments that you really hope for in a series.

When we got back into the edit suite, we were like, “How do we lay this out and how do we craft it?” We threw the script out. The whales apparently didn’t read it. They had other things in mind. In a way, the narratives were led by the whales themselves. Let’s see what the whales gave us, and then we’ll craft our stories based on that.

“We threw the script out. The whales apparently didn’t read it.”

Ars Technica: Is there perhaps a risk that we’re anthropomorphizing the whales too much, effectively casting them in our own image?  

Brian Armstrong: Previously, it’s almost been taboo to talk about animals as having emotion, having culture. Darwin spoke about it 100 years ago in Origin of Species, but we kind of ignored that. It’s only more recently that scientists have looked more closely at what these whales are doing. When you see a mother orca carrying around a dead calf for days and days, how do you explain that? You don’t want to anthropomorphize, but it [looks like] mourning. It’s grieving. That opened the door to let us have this emotional connection to them.

James Cameron: As proper cetacean researchers, you have to be very, very careful about not reading too much into the tea leaves. But I think the examples that we’ve documented are pretty resoundingly obvious. I think the danger in anthropomorphizing [whales] is to assume they think like we do, just because they evince behaviors that are similar to our behaviors. They may not. I’m very curious to learn more about whale thought and philosophy and perspective, because they’re non-technological. They operate in their world in a different way. They don’t build things, so they don’t control their world. They live in harmonious balance with their world.

The big question to me is “why do they need intelligence?” Sharks have gotten along fine for 250 million years with a very limited set of programs that run very well. And they haven’t pushed up the evolutionary ladder to have complex intelligence, culture, emotion, and so on. Why does that serve the whales so well? Why is intelligence emerging in a non-technological species? I think we understand the positive feedback loop between language, tool use, [being] bipedal and upright, freeing our hands to do other things. We understand that positive feedback loop that led to us. What’s the positive feedback loop that led to whale intelligence, culture, emotion?

I think that the degree to which we can make a case that they are sentient, emotional, intelligent beings is the degree to which we have the moral requirement to keep them alive, to curb our rapacious behavior with respect to the ocean and make space for them in our world. We’ve grabbed the tiller on the biosphere, for better or worse, but we’re not particularly good stewards yet. We haven’t gained that wisdom. So I think they can teach us a lot.

Analyst: Nintendo says Microsoft’s xCloud streaming isn’t coming to Switch

That Note 20 Ultra Android phone (with removable controller) at the top of the image is the closest you’re gonna get to a Switch-like xCloud streaming experience.

For years now, there have been rumors that Microsoft and Nintendo were planning a major partnership to bring the xCloud game streaming features of Xbox Game Pass to the Nintendo Switch. But now an analyst is citing Nintendo itself as saying that rumored team-up won’t be happening.

Game industry analyst and Astris Advisory Japan founder David Gibson tweeted yesterday that while a Switch/xCloud partnership “would make a lot of sense… I have had Nintendo tell me directly they would not put other streaming services on the Switch.” With Nintendo not offering a comment on the matter to Ars Technica, that kind of secondhand sourcing from an analyst in a position to know might be the best information we get for the time being.

Gibson’s tweet came in response to more speculative tweets from NPD analyst Mat Piscatella explaining why he thought such a partnership would be a good idea. “Nintendo would get a massive content gain and sell millions of incremental Switch, [and] Xbox Cloud would be in front of millions of new potential subscribers,” Piscatella said. In the same tweet, though, Piscatella noted that “none of this means that Xbox Cloud will actually ever make it to Switch… there is a list of reasons why it wouldn’t.” (And no, a Switch in the background of an Xbox livestream probably doesn’t point to any of those reasons in either direction.)

Back in 2019, Game Informer cited unnamed sources in reporting that a “Game Pass on Switch” announcement “could come as soon as this year.” Windows Central’s Jez Corden offered a similar report at the time, saying that he had “been hearing for almost a year that Microsoft was aiming to put Xbox Game Pass on Nintendo Switch and even PlayStation 4.”

That potential collaboration wasn’t as ridiculous as it might have seemed at first glance. It certainly would have fit with Phil Spencer’s December 2018 statement to Gamespot that Game Pass “started on console, it will come to PC, eventually it will come to every device.” XCloud General Manager Catherine Gluckstein followed that statement up in a 2019 interview with Wired, saying that Microsoft had “a vision to bring Project xCloud to every device where people want to play… I wouldn’t rule anything out, I wouldn’t rule anything in at this time.”

On Nintendo’s side, the Switch maker has already partnered with third-party publishers for cloud-based streaming of Resident Evil 7 and Assassin’s Creed Odyssey in Japan, as well as international cloud versions of Hitman 3 and Control on the Switch. These are games that would otherwise be difficult for the underpowered Switch hardware to run natively, a situation that would apply to most of the streaming games on Xbox Game Pass as well.

Former Xbox exclusives like Ori and the Will of the Wisps (originally published by Microsoft Game Studios) and Cuphead have seen native releases on the Switch in recent years, too. And that’s not even mentioning the continued success of Microsoft-owned Minecraft on the Switch.

Alas, for now it seems the connections between Microsoft and Nintendo won’t extend to streaming a copy of Forza Motorsport or Halo on the Switch. In the meantime, at least we can sideload Android onto a hacked Switch and get some unauthorized cloud gaming that way.

New handwriting analysis reveals two scribes wrote one of the Dead Sea Scrolls

Photographic reproduction of the Great Isaiah Scroll, the best preserved of the biblical scrolls found at Qumran. It contains the entire Book of Isaiah in Hebrew, apart from some small damaged parts.

Most of the scribes who copied the text contained in the Dead Sea Scrolls were anonymous, as they neglected to sign their work. That has made it challenging for scholars to determine whether a given manuscript should be attributed to a single scribe or more than one, based on unique elements in their writing styles (a discipline known as paleography). Now, a new handwriting analysis of the Great Isaiah Scroll, applying the tools of artificial intelligence, has revealed that the text was likely written by two scribes, mirroring one another’s writing style, according to a new paper published in the journal PLOS ONE.

As we’ve reported previously, these ancient Hebrew texts—roughly 900 full and partial scrolls in all, stored in clay jars—were first discovered scattered in various caves near what was once the settlement of Qumran, just north of the Dead Sea, by Bedouin shepherds in 1946-1947. (Apparently, a shepherd threw a rock while searching for a lost member of his flock and accidentally shattered one of the clay jars, leading to the discovery.) Qumran was destroyed by the Romans, circa 73 CE, and historians believe the scrolls were hidden in the caves by a sect called the Essenes to protect them from being destroyed. The natural limestone and conditions within the caves helped preserve the scrolls for millennia; they date back to between the third century BCE and the first century CE.

Several of the parchments have been carbon dated, and synchrotron radiation—among other techniques—has been used to shed light on the properties of the ink used for the text. Most recently, in 2018, an Israeli scientist named Oren Ableman used an infrared microscope attached to a computer to identify and decipher Dead Sea Scroll fragments stored in a cigar box since the 1950s.

A 2019 study of the so-called Temple Scroll concluded that the parchment has an unusual coating of sulfate salts (including sulfur, sodium, gypsum, and calcium), which may be one reason the scrolls were so well-preserved. And last year, researchers discovered that four fragments stored at the University of Manchester, long presumed to be blank, actually contained hidden text, most likely a passage from the Book of Ezekiel.

The current paper focuses on the Great Isaiah Scroll, one of the original scrolls discovered in Qumran Cave 1 (designated 1QIsaᵃ). It’s the only scroll from the caves to be entirely preserved, apart from a few small damaged areas where the leather has cracked off. The Hebrew text is written on 17 sheets of parchment, measuring 24 feet long and around 10 inches in height, containing the entire text of the Book of Isaiah. That makes the Isaiah Scroll the oldest complete copy of the book by about 1,000 years. (The Israel Museum, in partnership with Google, has digitized the Isaiah Scroll along with an English translation as part of its Dead Sea Scrolls Digital Project.)

Most scholars believed that the Isaiah Scroll was copied by a single scribe because of the seemingly uniform handwriting style. But others have suggested that it may be the work of two scribes writing in a similar style, each copying one of the scroll’s two distinct halves. “They would try to find a ‘smoking gun’ in the handwriting, for example, a very specific trait in a letter that would identify a scribe,” said co-author Mladen Popović of the University of Groningen. Popović is also director of the university’s Qumran Institute, dedicated to the study of the Dead Sea Scrolls.

In other words, the traditional paleographic method is inherently subjective and based on a given scholar’s experience. It’s challenging in part because a single scribe’s writing can show a fair amount of natural variability, so how does one decide whether a subtle difference is just that, or the mark of a different hand? Further complicating matters, two scribes who shared a common training can produce very similar handwriting, while one scribe’s hand can shift if he was fatigued or injured, or if he switched writing implements.

“The human eye is amazing and presumably takes these levels into account, too. This allows experts to ‘see’ the hands of different authors, but that decision is often not reached by a transparent process,” said Popović. “Furthermore, it is virtually impossible for these experts to process the large amounts of data the scrolls provide.” The Isaiah Scroll, for instance, contains at least 5,000 occurrences of the letter aleph (“a”), making it well-nigh impossible to compare every single aleph by eye. He thought pattern recognition and artificial intelligence techniques would be well suited to the task.

First, Popović and his colleagues—Lambert Schomaker and grad student Maruf Dhali—developed an artificial neural network they could train to separate (“binarize”) the ink of the text from the leather or papyrus on which it was written, ensuring that the digital images precisely preserved the original markings. “This is important because the ancient ink traces relate directly to a person’s muscle movement and are person-specific,” said Schomaker.
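
That binarizer is a trained model, so there’s no short snippet that reproduces it. As a much simpler stand-in that shows what binarization means here, though, the C++ sketch below applies Otsu’s classic global threshold: grayscale pixels in, ink-or-background labels out. This is explicitly not the paper’s method, just about the simplest algorithm that performs the same job.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Otsu's method: pick the gray level that best separates dark (ink)
// pixels from light (background) pixels by maximizing the
// between-class variance of the two resulting groups.
int otsuThreshold(const std::vector<std::uint8_t> &gray) {
    std::array<long long, 256> hist{};
    for (std::uint8_t px : gray) hist[px]++;

    const long long total = static_cast<long long>(gray.size());
    long long sumAll = 0;
    for (int t = 0; t < 256; ++t) sumAll += static_cast<long long>(t) * hist[t];

    long long sumBg = 0, weightBg = 0;
    double bestVar = -1.0;
    int bestT = 0;
    for (int t = 0; t < 256; ++t) {
        weightBg += hist[t];  // pixels at or below candidate threshold t
        if (weightBg == 0) continue;
        const long long weightFg = total - weightBg;
        if (weightFg == 0) break;
        sumBg += static_cast<long long>(t) * hist[t];
        const double meanBg = static_cast<double>(sumBg) / weightBg;
        const double meanFg = static_cast<double>(sumAll - sumBg) / weightFg;
        const double betweenVar = static_cast<double>(weightBg) * weightFg *
                                  (meanBg - meanFg) * (meanBg - meanFg);
        if (betweenVar > bestVar) { bestVar = betweenVar; bestT = t; }
    }
    return bestT;
}

// Label every pixel darker than the threshold as ink.
std::vector<bool> binarize(const std::vector<std::uint8_t> &gray) {
    const int t = otsuThreshold(gray);
    std::vector<bool> ink(gray.size());
    for (std::size_t i = 0; i < gray.size(); ++i) ink[i] = gray[i] < t;
    return ink;
}
```

A single global threshold like this is quickly defeated by parchment texture, stains, and cracked edges, which is exactly why the researchers trained a network instead; the input/output contract, however, is the same.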

They next created two 12×12 self-organizing maps of full-character aleph and bet from the Isaiah Scroll’s pages, each letter formed from multiple instances of similar characters. Such maps are useful for analyzing how a writing style develops chronologically. For the writer-identification analysis itself, however, the team relied on “fraglets” (fragmented character shapes) rather than full character shapes, which produced more robust results.
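
A self-organizing (Kohonen) map is compact enough to sketch in code. The version below trains a grid of prototype vectors on arbitrary feature vectors; in the study, those inputs would be shape descriptors of characters or fraglets. Aside from the 12×12 grid size mentioned above, every value here is a placeholder rather than the paper’s actual configuration.

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Minimal self-organizing (Kohonen) map. Each grid node holds a
// prototype vector; training pulls the best-matching node, and its
// grid neighbors, toward each input vector.
struct SOM {
    int rows, cols, dim;
    std::vector<std::vector<double>> nodes;  // rows*cols prototypes

    SOM(int r, int c, int d)
        : rows(r), cols(c), dim(d),
          nodes(static_cast<std::size_t>(r) * c, std::vector<double>(d)) {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        for (auto &node : nodes)
            for (auto &w : node) w = u(rng);  // random initialization
    }

    // Index of the best matching unit: the node closest to x.
    int bmu(const std::vector<double> &x) const {
        int best = 0;
        double bestDist = 1e300;
        for (int i = 0; i < rows * cols; ++i) {
            double d = 0.0;
            for (int k = 0; k < dim; ++k) {
                const double diff = nodes[i][k] - x[k];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    void train(const std::vector<std::vector<double>> &data, int epochs) {
        for (int e = 0; e < epochs; ++e) {
            // Learning rate and neighborhood radius both decay over time,
            // so the map settles from coarse ordering into fine tuning.
            const double t = static_cast<double>(e) / epochs;
            const double lr = 0.5 * (1.0 - t);
            const double radius = 1.0 + (rows / 2.0) * (1.0 - t);
            for (const auto &x : data) {
                const int b = bmu(x);
                const int br = b / cols, bc = b % cols;
                for (int i = 0; i < rows * cols; ++i) {
                    const int ir = i / cols, ic = i % cols;
                    const double dist2 = (ir - br) * (ir - br) +
                                         (ic - bc) * (ic - bc);
                    // Gaussian falloff with grid distance from the BMU.
                    const double h =
                        std::exp(-dist2 / (2.0 * radius * radius));
                    for (int k = 0; k < dim; ++k)
                        nodes[i][k] += lr * h * (x[k] - nodes[i][k]);
                }
            }
        }
    }
};
```

Roughly speaking, each trained node ends up as a prototype for one variant of a letterform, and comparing how characters from different parts of a manuscript distribute across those prototypes is one way such a map can help expose distinct hands.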

The results indicated two different handwriting styles, an outcome that persisted even after the team added extra noise to the data as an additional check. That analysis also showed the second scribe’s handwriting was more variable than that of the first, although the two styles were quite similar, indicating a possible common training.

“We will never know their names. But this feels as if we can finally shake hands with them through their handwriting.”

Finally, Popović et al. created “heat maps” for a visual analysis, incorporating all the variations of a given character throughout the scroll. They used this to create an averaged version of the character for the first 27 and last 27 columns, making it clear to the naked eye that the two averaged characters were different from each other—and hence more evidence of a second scribe copying out the second half of the scroll.
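
The averaging behind those heat maps is conceptually simple once every occurrence of a character has been cropped and normalized to a common pixel grid: it’s a per-pixel mean, along the lines of this sketch (the flat-array image representation is a stand-in, not the paper’s actual data format).

```cpp
#include <cstddef>
#include <vector>

// Per-pixel average of normalized character images. Stacking every
// occurrence of, say, aleph from one half of the scroll yields a
// "heat map" whose dense regions show where ink usually falls.
std::vector<double> averageImage(
        const std::vector<std::vector<double>> &instances,
        std::size_t pixels) {
    std::vector<double> mean(pixels, 0.0);
    for (const auto &img : instances)
        for (std::size_t p = 0; p < pixels; ++p) mean[p] += img[p];
    if (!instances.empty())
        for (double &m : mean) m /= static_cast<double>(instances.size());
    return mean;
}

// Subtracting one half's average from the other's highlights exactly
// where the presumed second scribe's letterforms diverge from the first's.
std::vector<double> differenceMap(const std::vector<double> &a,
                                  const std::vector<double> &b) {
    std::vector<double> diff(a.size());
    for (std::size_t p = 0; p < a.size(); ++p) diff[p] = a[p] - b[p];
    return diff;
}
```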

“Now, we can confirm this with a quantitative analysis of the handwriting as well as with robust statistical analyses,” said Popović. “Instead of basing judgment on more-or-less impressionistic evidence, with the intelligent assistance of the computer, we can demonstrate that the separation is statistically significant.”

The authors acknowledge that their analysis doesn’t completely rule out the possibility that the variations are due to a scribe’s fatigue, injury, or a change of pen, but “the more straightforward explanation is that a change in scribes occurred,” they wrote. They also concluded that their study shows the added value that scholars engaged in paleographic research can gain by collaborating with other disciplines.

The next step is to apply their methods to more of the Dead Sea Scrolls. “We are now able to identify different scribes,” said Popović of the significance of their findings. “We will never know their names. But after seventy years of study, this feels as if we can finally shake hands with them through their handwriting.”

DOI: PLOS ONE, 2021. 10.1371/journal.pone.0249769  (About DOIs).
