
“Stonehenge Lego” scale model reveals the pagan monument’s unique soundscape


Acoustic research using a scale model 1/12th the size of Stonehenge finds that the completed monument would have magnified speech and improved musical sounds, but only for those inside the stone circle.

Scientists built a scale model of Stonehenge, the famous megalithic monument in Wiltshire, England, and used it to recreate how sound would have reflected off the surfaces of the stones. They found that the arrangement of the stones likely would have amplified speech and enhanced music, but only for listeners within the circle, according to a recent study in the Journal of Archaeological Science.

Dubbed “Stonehenge Lego,” the scale model is the work of acoustical engineer Trevor Cox of the University of Salford in England and several colleagues. (Fun fact: way back in 2007/2008, Cox conducted a yearlong study to identify the top 10 worst sounds. The sound of someone vomiting topped the list, followed by microphone feedback, wailing babies, and a train scraping along the track rails.) This latest paper builds on their preliminary findings last year. They’ve since been working on testing the acoustics of different configurations of the stones that would have existed at different times in the monument’s long history.

Recreating historical “soundscapes” is part of a relatively young field known as acoustic archaeology (or archaeoacoustics). For instance, researchers have sought to understand how acoustics may have influenced the outcome of key Civil War battles, like the Battle of Seven Pines on May 31, 1862. Another effect of interest to acoustic archaeologists is the chirping sound—reminiscent of the call of the quetzal, a brightly colored exotic bird native to the region—when you clap your hands at the bottom of one of the massive staircases of the Mayan Temple of Kukulkan at Chichen Itza in central Mexico.

These soundscapes can have practical applications. Case in point: last year, the famed Notre Dame Cathedral in Paris was badly damaged in a fire. Fortunately, French acousticians had made detailed measurements of Notre Dame’s “soundscape” over the last few years. Another scientist, Andrew Tallon, had used laser scanning to create precisely detailed maps of the interior and exterior. All that data will be instrumental in helping architects factor acoustics into their reconstruction plans, in hopes of preserving the cathedral’s unique soundscape.

Stonehenge, too, has been known to exhibit unusual acoustic effects; it hums in strong winds, for instance. And in 2017, researchers from London’s Royal College of Art found that its igneous bluestones produce a loud clanging noise when struck, which those who built the monument may have associated with mystical or healing powers. That could explain why some of those stones were transported such long distances; most were likely quarried in a Welsh town called Maenclochog (translation: “ringing rock”). Apparently, the local townspeople used the bluestones as church bells until the 18th century.

Acousticians have measured the properties of the monument as it exists today, but “the sound is very different from the past because so many stones are now missing or displaced,” Cox et al. wrote in their latest paper. There is actually a full-scale model of Stonehenge in Maryhill, Washington, that is closer to its prehistoric formation, and scientists have measured its acoustical properties as well. But Cox et al. believe their scale model more accurately takes into account the shape and size of the actual stones.

“The problem with the other models we have is that the stones aren’t quite the right shape and size, and how the sound interacts with the stones depends critically on the shapes,” Cox told The Guardian last year. “Those blocks at Maryhill are all very rectangular, whereas real Stonehenge, when you look at it, they are all a bit more amorphous because they are made out of stones that have been hand chiselled.”

Acoustical engineer Trevor Cox works with a scale model of Stonehenge in a sound chamber at England’s University of Salford.

Acoustics Research Centre/University of Salford

Cox et al. relied upon laser scans of the site itself, along with existing archaeological evidence, to build their model, which is 1/12th the size of the real thing—as large a replica as the university’s acoustic chamber could accommodate. The outer circle of standing sarsen stones likely numbered 30 originally; today, there are five standing sarsen stones among the 63 complete stones at the site, along with 12 fragmented ones. Archaeologists have estimated a total of 157 stones were placed at the site some 4,200 years ago.

To make their model, Cox and his colleagues 3D-printed 27 stones of varying shapes and sizes. “You 3D print them and then you make silicone moulds out of them, and then you cast them in a plaster-polymer mix, and then you paint them in car paint,” Cox told The Guardian. “I ruined my dining room floor.” Then they placed the model into the sound chamber to make their measurements.

As we’ve reported previously, there is a definitive relationship between the quality of a room’s acoustics, the size of the chamber, and the amount of sound-absorbing surface present. This is captured in a well-known formula for calculating reverberation time, still the critical factor for gauging a space’s acoustical quality. Reverberation is not the same as an echo, which is what happens when a sound repeats. Reverb is what happens indoors when sound can’t travel a sufficient distance to produce those echoing delays; instead, you get a continuous ring that gradually “decays” (fades).
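
The article doesn't spell out the formula, but the best-known version is Sabine's equation, RT60 = 0.161 · V / A, relating reverberation time to room volume and total absorption. Here's a minimal sketch of that calculation in Python; the room dimensions and absorption coefficients are illustrative guesses, not measurements:

```python
# Sabine's equation: RT60 = 0.161 * V / A, where V is the room volume
# in cubic meters and A is the total absorption in square-meter sabins.
# The dimensions and absorption coefficients below are illustrative
# guesses for a furnished living room, not measured values.

def sabine_rt60(volume_m3: float, surfaces: list[tuple[float, float]]) -> float:
    """Estimate RT60 in seconds from (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A 6 m x 5 m x 3 m living room (volume 90 m^3):
living_room = [
    (30.0, 0.40),  # carpeted floor
    (30.0, 0.10),  # drywall ceiling
    (62.0, 0.05),  # painted walls, minus the window
    (4.0, 0.18),   # glass window
    (8.0, 0.55),   # heavy curtains
    (10.0, 0.40),  # sofa and other soft furnishings
]
print(f"{sabine_rt60(90.0, living_room):.2f} s")  # ~0.53 s
```

The relationship the paragraph above describes falls straight out of the formula: bigger rooms (larger V) ring longer, while more absorption (larger A) damps the ring faster.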

Acousticians typically measure so-called “impulse responses” on location and store them digitally for later use. Clap your hands inside an empty concert hall or church. That’s the impulse. (A starting pistol or a popping balloon are also good impulses.) The sound reflections you hear are the building’s response. Record both impulse and response, then compare the acoustic profile with a recording of just the impulse for reference, and you can extract a model of the room’s reverberations.
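
That comparison step is, in signal-processing terms, a deconvolution. A minimal numpy sketch of the idea (not any particular lab's tooling), assuming both recordings are mono arrays at the same sample rate:

```python
import numpy as np

def estimate_impulse_response(dry_signal, room_recording, eps=1e-8):
    """Recover a room's impulse response by frequency-domain deconvolution.

    dry_signal: the test signal alone (clap, balloon pop, sweep).
    room_recording: the same signal recorded in the room, with reflections.
    """
    n = len(dry_signal) + len(room_recording) - 1  # full linear-convolution length
    X = np.fft.rfft(dry_signal, n)
    Y = np.fft.rfft(room_recording, n)
    # Regularized division Y/X, so near-zero frequency bins in X don't blow up.
    H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)
    return np.fft.irfft(H, n)  # the impulse response, h(t)
```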

That’s basically what Cox et al. have done with their scale model of Stonehenge. They placed several microphones and speakers throughout the structure, both inside the circle and just outside of it. Then they played chirping sounds of both high and low frequencies through the speakers, and the reflections of the sounds off the model stones were captured and recorded by the microphones.

They found that the reverberation time lasted about 0.6 seconds inside the circle for mid-frequency sounds—ideal for amplifying human speech, or the sounds of musical instruments like drums. (For comparison, your living room probably has a reverb of about 0.4 seconds. A large concert hall typically has a reverberation time of about two seconds, while a cathedral like Notre Dame has a very long reverb of roughly eight seconds.) But those sounds were not projected beyond the circle into the surrounding area, and there were no echoes, since the inner groups of stones served to muddle and scatter the sounds reflected off the outer circle.
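
To connect the measurements to a number like that 0.6-second figure: the textbook way to extract reverberation time from a measured impulse response is Schroeder backward integration of its energy decay. A sketch of that method (not the study's actual analysis code):

```python
import numpy as np

def rt60_from_impulse_response(ir, sample_rate):
    """Estimate RT60 via Schroeder backward integration (a 'T20' measurement).

    Assumes the impulse response is long enough to capture 25 dB of decay.
    """
    energy = np.asarray(ir, dtype=float) ** 2
    # Schroeder decay curve: energy remaining after each instant, in dB.
    edc = np.cumsum(energy[::-1])[::-1]
    edc_db = 10 * np.log10(edc / edc[0])
    # Samples at which the decay first crosses -5 dB and -25 dB.
    t5 = np.argmax(edc_db <= -5)
    t25 = np.argmax(edc_db <= -25)
    # 20 dB of decay took (t25 - t5) samples; extrapolate to 60 dB.
    return 3 * (t25 - t5) / sample_rate
```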

Cox and his co-authors are careful to point out that the acoustical properties of Stonehenge were not necessarily the primary driver of the monument’s unique design. “It seems improbable that sound was a primary driver in the design and arrangement of the stones at Stonehenge,” they wrote. “Other considerations were more likely to be important, including the astronomical alignments, the incorporation of two different groups of stones, the replication of similar timber monuments, and the creation of an impressive and awe-inspiring architectural structure.”

This latest study “shows that sound was fairly well contained within the monument and, by implication, [Stonehenge] was fairly well insulated from sounds coming in,” archaeologist Timothy Darvill of Bournemouth University in England—who is not affiliated with the new research—told Science News, adding that the unique acoustical properties “must have been one of the fundamental experiences of Stonehenge.”

DOI: Journal of Archaeological Science, 2020. 10.1016/j.jas.2020.105218 (About DOIs).


The Last of Us’ first PC port is riddled with apparent performance issues


[Embedded Reddit post: “PC Shaders go brrrrr” by u/chrysillium in r/thelastofus]

Naughty Dog says it is “actively investigating multiple issues” as complaints about graphical and performance issues continue to flood in following the PC release of The Last of Us: Part 1 on Tuesday.

The thousands of reviews on Steam—67 percent of which are negative, as of this writing—tell the tale of players facing massive problems simply playing the game they purchased. There is an overwhelming number of complaints about everything from frequent crashes and extreme loading times to “severe stuttering” during basic gameplay. And while some positive reviews praise the game’s underlying console versions, others complain that the PC edition is currently “stuttering, crashing, and unplayable.”

[Embedded Reddit post: “Even Joel can’t believe the amount of loading time..” by u/RuneLFox in r/thelastofus]

Many user complaints seem to focus on the extreme amount of time needed for the game to build its graphical shader cache the first time it’s loaded. One Reddit user shared a timelapse of a 70-minute wait for those shaders to compile. Others point out that this extended loading time is particularly significant given Steam’s two-hour playtime window for requesting a no-questions-asked refund.

A “known issues” update on the Naughty Dog support site acknowledges issues with shader loading taking “longer than expected” and stresses that “performance and stability is degraded” while those shaders are loading. The support page also warns players of a “potential memory leak” (which some forum-goers are attributing to a bugged decompression library) and that “older graphics drivers” can also contribute to “instability or graphical problems.”

“The Last of Us Part I PC players: we’ve heard your concerns, and our team is actively investigating multiple issues you’ve reported,” Naughty Dog wrote in a tweet Tuesday evening. “We will continue to update you, but our team is prioritizing updates and will address issues in upcoming patches.”

[Embedded Reddit post: “Joel has seen better days…” by u/can_i_see_your_cat in r/thelastofus]

In a blog post accompanying the game’s PC launch Tuesday, Naughty Dog’s Christian Gyrling noted that moving the PS5-optimized Last of Us engine to PC involved “a large amount of tuning, tweaking, and even re-thinking, especially when it came to how we utilized the GPU.” The team was focused on “maintaining the equally high-quality bar across both PC and PlayStation consoles,” Gyrling wrote.

Nixxing Nixxes?

Some eagle-eyed fans started expressing worries about this latest PlayStation-to-PC port earlier this month when an Iron Galaxy logo appeared at the bottom of a PC spec sheet posted on the Naughty Dog blog. Iron Galaxy, you may remember, was responsible for the “seriously broken” port of Batman: Arkham Knight in 2015, which was eventually pulled from Steam amid widespread demands for refunds. Four months later, after multiple patches, players were still reporting massive resource allocation issues with that version of the game.

Iron Galaxy’s apparent involvement is especially notable given Sony’s 2021 purchase of Nixxes, which has been responsible for better-received PC ports of PlayStation titles like Spider-Man and Horizon Zero Dawn. Then again, Iron Galaxy did work on the PC release of Uncharted: Legacy of Thieves Collection, which Digital Foundry called an “accomplished but unambitious port” upon its release last year.

[Embedded Reddit post: “TLOUP1, Anyone else having glitches like these on PC?” by u/official_tommy_boi in r/thelastofus]

Ironically enough, this week’s PC release came after a 25-day delay that Naughty Dog said at the time was to ensure the “PC debut is in the best shape possible.” Who knows how many more days players will have to wait until the game has truly reached that “best shape possible” status.

Listing image by Reddit / official_tommy_boi


Elemental music: Interactive periodic table turns He, Fe, Ca into Do, Re, Mi


Graduate student W. Walker Smith converted the visible light given off by the elements into audio, creating unique, complex sounds for each one. His personal favorites are helium and zinc.

W. Walker Smith and Alain Barker

We’re all familiar with the elements of the periodic table, but have you ever wondered what hydrogen or zinc, for example, might sound like? W. Walker Smith, now a graduate student at Indiana University, combined his twin passions of chemistry and music to create what he calls a new audio-visual instrument to communicate the concepts of chemical spectroscopy.

Smith presented his data sonification project—which essentially transforms the visible spectra of the elements of the periodic table into sound—at a meeting of the American Chemical Society being held this week in Indianapolis, Indiana. Smith even featured audio clips of some of the elements, along with “compositions” featuring larger molecules, during a performance of his “The Sound of Molecules” show.

As an undergraduate, “I [earned] a dual degree in music composition and chemistry, so I was always looking for a way to turn my chemistry research into music,” Smith said during a media briefing. “Eventually, I stumbled across the visible spectra of the elements and I was overwhelmed by how beautiful and different they all look. I thought it would be really cool to turn those visible spectra, those beautiful images, into sound.”

What do the elements sound like?

Data sonification is not a new concept. For instance, in 2018, scientists transformed NASA’s image of Mars rover Opportunity on its 5,000th sunrise on Mars into music. The particle physics data used to discover the Higgs boson, the echoes of a black hole as it devoured a star, and magnetometer readings from the Voyager mission have also been transposed into music. And several years ago, a project called LHCSound built a library of the “sounds” of a top quark jet and the Higgs boson, among others. The project hoped to develop sonification as a technique for analyzing the data from particle collisions so that physicists could “detect” subatomic particles by ear.

Markus Buehler’s MIT lab famously mapped the molecular structure of proteins in spider silk threads onto musical theory to produce the “sound” of silk in hopes of establishing a radical new way to create designer proteins. The hierarchical elements of music composition (pitch, range, dynamics, tempo) are analogous to the hierarchical elements of protein structure. The lab even devised a way for humans to “enter” a 3D spider web and explore its structure both visually and aurally via a virtual reality setup. The ultimate aim is to learn to create similar synthetic spiderwebs and other structures that mimic the spider’s process.

Several years later, Buehler’s lab came up with an even more advanced system of making music out of a protein structure by computing the unique fingerprints of all the different secondary structures of proteins to make them audible via transposition—and then converting it back to create novel proteins never before seen in nature. The team also developed a free Android app called the Amino Acid Synthesizer so users could create their own protein “compositions” from the sounds of amino acids.

So Smith is in good company with his interactive periodic table project. All the elements release distinct wavelengths of light, depending on their electron energy levels, when stimulated by electricity or heat, and those chemical “fingerprints” make up the visible spectra at the heart of chemical spectroscopy. Smith translated those different frequencies of light into different pitches or musical notes using an instrument called the Light Soundinator 3000, scaling down those frequencies to be within the range of human hearing. He professed amazement at the sheer variety of sounds.

“Red light has the lowest frequency in the visible range, so it sounds like a lower musical pitch than violet,” said Smith, demonstrating on a toy color-coded xylophone. “If we move from red all the way up to violet, the frequency of the light keeps getting higher, and so does the frequency of the sound. Violet is almost double the frequency of red light, so it actually sounds close to a musical octave.” And while elements with simpler spectra, like hydrogen and helium, which have only a few lines, sound like “vaguely musical” chords, elements with more complex spectra consisting of thousands of lines are dense and noisy, often sounding like “a cheesy horror movie effect,” according to Smith.
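
Smith's exact mapping isn't detailed in the article, but the idea he describes, dividing each light frequency by a power of two so it lands in the audible range while preserving the intervals between spectral lines, can be sketched in a few lines of Python:

```python
# Shift each visible-light frequency down a fixed number of octaves
# (each octave halves the frequency) so it lands in the audible range.
# The 40-octave shift is an illustrative choice, not necessarily Smith's.
SPEED_OF_LIGHT = 2.998e8  # meters per second
OCTAVE_SHIFT = 40         # divide frequency by 2**40

def wavelength_nm_to_pitch_hz(wavelength_nm: float) -> float:
    light_hz = SPEED_OF_LIGHT / (wavelength_nm * 1e-9)
    return light_hz / 2**OCTAVE_SHIFT

print(wavelength_nm_to_pitch_hz(700))  # red    -> ~390 Hz
print(wavelength_nm_to_pitch_hz(400))  # violet -> ~682 Hz
```

With this particular shift, 700 nm red lands near 390 Hz and 400 nm violet near 682 Hz, reproducing the near-octave relationship Smith demonstrates.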

His favorites: helium and zinc. “If you listen to the frequencies [of helium] one by one instead of all at once, you get an interesting scale pattern that I have used to make a couple of compositions, including a ‘helium dance party,'” said Smith. As for zinc, “The first row of transition metals have very complex, dense grating sounds. But zinc, for whatever reason, despite having a large number of frequencies, sounds like an angelic vocalist singing with vibrato.”

Smith is currently collaborating with the WonderLab Museum in Bloomington, Indiana, to develop a museum exhibit that would enable visitors to interact with the periodic table, listen to the elements, and make their own musical compositions from the various sounds. “The main thing I want to [convey] is that science and the arts aren’t so different after all,” he said. “Combining them can lead to new research questions, but also new ways to communicate and reach larger audiences.”


Why Transformers now look like a big bunch of gears and car parts


How did one of the rarest 911s end up becoming a Transformer?

Stef Schrader

“I didn’t know what car Mirage was going to be at first,” said Steven Caple Jr., director of Transformers: Rise of the Beasts. “Where I’m from, in Cleveland, Ohio, I’d never even been in a Porsche before,” he continued. “My actual first introduction to Porsche was Bad Boys I, so shout out to Michael Bay—that’s all I really had.”

Caple admitted in a panel during Austin’s South by Southwest festival that the star car of the beloved action film Bad Boys inspired him to make Mirage a classic Porsche in the upcoming film. Mirage is a bit of a rebel himself, and the callback to the classic buddy-cop movie just felt right.

Fortunately, extraterrestrial Autobots won’t be tempted to pull over in any sketchy places to debate the merits of in-car snacking, but they do have bigger nemeses to handle, ones that require transforming into giant robots. It can be more complicated than you’d expect to turn a cool Porsche into an Autobot film star, though—in fact, Porsche has a whole team that helps Hollywood studios get just the right car on the silver screen. Here’s how it all comes together.

Character development

It starts with a character. Filmmakers have a certain look and vibe in mind when a new Transformer is “cast,” so to speak. Mirage is a bad boy with an attitude, and the film, set in 1994, is meant to be a sequel to Bumblebee. That made Caple think of the 1994 911 Turbo from Bad Boys.

“I was born in the ’80s, and I was a kid in the ’90s… this is the era when I grew up,” Caple explained. “This movie is like a time capsule to me.”

“You get to ’94, and everything started to change—from the wardrobe to the culture to the music to the cars,” he continued. “You start to step away from square-bodied cars and say, ‘hello curves.'”

You probably have to be pretty into your Porsches to spot that this is a 3.8 RS and not a 911 Turbo.

Stef Schrader

The “casting” choice of the 964-era 911—a car that was dramatically smoother and more streamlined than any 911 before it—is a callback for the current Transformers series, given that Bad Boys was Michael Bay’s feature-length directorial debut. Yet Mirage has always been portrayed as an upper-crust member of Autobot society, so it makes sense that the Transformers team picked an even rarer 964-generation Porsche to portray him: a 1993 911 Carrera RS 3.8.

“When I was designing the character, it started there,” Caple said. “I talked to Owen [Shively] and the team at Porsche and said… he’s going to be an outlaw. He’s going to be a rebel. Going to be flashy. Very confident, but smooth.”

That’s when Porsche suggested looking into the 911 Carrera RS 3.8.

The Carrera RS 3.8 uses the same wider body shape as Bad Boys’ 911 Turbo, but it was a homologation special, produced to legalize the Carrera RSR race car, with a host of lightweight parts and a hardcore aerodynamic package designed for track domination. Porsche only ever made 55 RS 3.8s, according to Total 911, making it an exceptionally rare ride. In other media and toys in the past, Mirage has been a Ferrari and a Formula 1 car, so an ultra-rare Porsche feels like a solid fit.

While many of us associate the Transformers series with the heavy use of CGI, the filmmakers still need to source real cars to use for many of the shots—and Porsche has a whole team dedicated to helping filmmakers place just the right car into film and television projects.

Owen Shively, from that early ideation conversation Caple mentioned, is the CEO of RTTM Agency, Porsche Cars North America’s exclusive representative when it comes to entertainment partnership requests like this. When Porsche needs someone to arrange a specific car for a new film or TV project, Shively’s agency is where they turn.
