
Facebook Messenger ‘Unsend Message’ Button Spotted

Photo Credit: Jane Manchun Wong

Facebook had earlier confirmed that the unsend feature would arrive in the future

[Facebook](https://gadgets.ndtv.com/tags/facebook) faced a lot of backlash in April this year after it confirmed that it had deleted CEO Mark Zuckerberg’s messages from several recipients’ inboxes on Messenger. In response, the company said it was looking to roll out an ‘Unsend Message’ feature that would allow all users to recall their sent messages. Now, six months after the announcement, screenshots of the ‘Unsend Message’ feature in testing have leaked, giving us hope that the social giant hasn’t forgotten its promise.

Tipster Jane Manchun Wong has shared screenshots of the ‘Unsend Message’ button prototype, sourced from Facebook Messenger’s Android code. It’s unclear when the company started testing the feature, and there is no word on when it will launch either. In its current state the feature is quite buggy, with the button deleting the message only from the sender’s inbox and not at the recipient’s end. Indeed, it’s possible that the code has shipped but isn’t being actively used as part of a test.

Facebook remained extremely vague when asked about the Unsend feature’s launch timeline, but confirmed its arrival sometime in the future. “Though we have nothing to announce today, we have previously confirmed that we intend to ship a feature like this and are still planning to do so,” a company spokesperson told TechCrunch.

As mentioned, there is no clarity on how the feature will work whenever it launches. Will messages expire automatically on a timer, or will users be able to unsend them manually within a limited window after sending? It’s still uncertain at this point. To recall, Messenger already offers a secret chat feature wherein messages can be timed to self-destruct, with durations ranging from 5 seconds up to 1 day. A Facebook Messenger spokesperson has previously said that the only possible implementation would be an expiration timer similar to the secret chat feature, deleting messages after the timer expires.

Other Facebook-owned services like WhatsApp and Instagram already allow users to delete sent messages. WhatsApp lets users delete messages for a limited amount of time after they have been sent. Instagram, on the other hand, lets users completely unsend a DM (Direct Message), provided the recipient has not seen the message.
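For comparison, the two policies reduce to simple eligibility rules. The sketch below is purely illustrative pseudologic: the function names, the one-hour window, and the seen-flag are assumptions for the example, not how either app actually implements deletion.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical window; WhatsApp only says deletion works for a
# "limited amount of time" after sending.
DELETE_WINDOW = timedelta(hours=1)

def whatsapp_can_delete(sent_at: datetime, now: datetime) -> bool:
    # Deletable only within a limited window after sending
    return now - sent_at <= DELETE_WINDOW

def instagram_can_unsend(seen_by_recipient: bool) -> bool:
    # Deletable at any time, provided the recipient hasn't seen it
    return not seen_by_recipient

print(whatsapp_can_delete(datetime.now(timezone.utc) - timedelta(minutes=5),
                          datetime.now(timezone.utc)))  # True
print(instagram_can_unsend(seen_by_recipient=True))     # False
```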


Science

Egyptologists translate the oldest-known mummification manual

Egyptologists have recently translated the oldest-known mummification manual. Translating it required solving a literal puzzle; the medical text that includes the manual is currently in pieces, with half of what remains in the Louvre Museum in France and half at the University of Copenhagen in Denmark. A few sections are completely missing, but what’s left is a treatise on medicinal herbs and skin diseases, especially the ones that cause swelling. Surprisingly, one section of that text includes a short manual on embalming.

For the text’s ancient audience, that combination might have made sense. The manual includes recipes for resins and unguents used to dry and preserve the body after death, along with explanations for how and when to use bandages of different shapes and materials. Those recipes probably used some of the same ingredients as ointments for living skin, because plants with antimicrobial compounds would have been useful for preventing both infection and decay.

New Kingdom embalming: More complicated than it used to be

The Papyrus Louvre-Carlsberg, as the ancient medical text is now called, is the oldest mummification manual known so far, and it’s one of just three that Egyptologists have ever found. Based on the style of the characters used to write the text, it probably dates to about 1450 BCE, which makes it more than 1,000 years older than the other two known mummification texts. But the embalming compounds it describes are remarkably similar to the ones embalmers used 2,000 years earlier in pre-Dynastic Egypt: a mixture of plant oil, an aromatic plant extract, a gum or sugar, and heated conifer resin.

Although the basic principles of embalming survived for thousands of years in Egypt, the details varied over time. By the New Kingdom, when the Papyrus Louvre-Carlsberg was written, the art of mummification had evolved into an extremely complicated 70-day-long process that might have bemused or even shocked its pre-Dynastic practitioners. And this short manual seems to be written for people who already had a working knowledge of embalming and just needed a handy reference.

“The text reads like a memory aid, so the intended readers must have been specialists who needed to be reminded of these details,” said University of Copenhagen Egyptologist Sofie Schiødt, who recently translated and edited the manual. Some of the most basic steps—like using natron to dry out the body—were skipped entirely, maybe because they would have been so obvious to working embalmers.

On the other hand, the manual includes detailed instructions for embalming techniques that aren’t included in the other two known texts. It lists ingredients for a liquid mixture—mostly aromatic plant substances like resin, along with some binding agents—which is supposed to coat a piece of red linen placed on the dead person’s face. Mummified remains from the same time period have cloth and resin covering their faces in a way that seems to match the description.

Royal treatment

“This process was repeated at four-day intervals,” said Schiødt. In fact, the manual divides the whole embalming process into four-day intervals, with two extra days for rituals afterward. After the first flurry of activity, when embalmers spent a solid four days of work cleaning the body and removing the organs, most of the actual work of embalming happened only every fourth day, with lots of waiting in between. The deceased spent most of that time lying covered in cloth piled with layers of straw and aromatic, insect-repelling plants.

For the first half of the process, the embalmers’ goal was to dry the body with natron, which would have been packed around the outside of the corpse and inside the body cavities. The second half included wrapping the body in bandages, resins, and unguents meant to help prevent decay.

The manual calls for a ritual procession of the mummy every four days to celebrate “restoring the deceased’s corporeal integrity,” as Schiødt put it. That’s a total of 17 processions spread over 68 days, with two solid days of rituals at the end. Of course, most Egyptians didn’t get such elaborate preparation for the afterlife. The full 70-day process described in the Papyrus Louvre-Carlsberg would have been mostly reserved for royalty or extremely wealthy nobles and officials.
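As a quick sanity check, the timeline arithmetic works out:

```latex
17 \times 4\ \text{days} = 68\ \text{days}, \qquad 68 + 2\ \text{ritual days} = 70\ \text{days total}
```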

A full translation of the papyrus is scheduled for publication in 2022.


Cars

Waymo recreated fatal crashes, putting its software at the wheel – here’s how it did

Waymo is tackling the safety question around autonomous vehicles head-on, using simulations to replay fatal crashes with the Alphabet company’s software in place of the human driver involved, to show what the Waymo Driver would’ve done differently. The research looked at every fatal accident recorded between 2008 and 2017 in Chandler, Arizona – where the Waymo One driverless car-hailing service currently operates.

“We excluded crashes that didn’t match situations that the Waymo Driver would face in the real world today, such as when crashes occurred outside of our current operating domain,” Trent Victor, Director of Safety Research and Best Practices at Waymo, explains. “Then, the data was used to carefully reconstruct each crash using best-practice methods. Once we had the reconstructions, we simulated how the Waymo Driver might have performed in each scenario.”

In total, Waymo reconstructed 72 different crashes. In those where two cars were involved, Waymo modeled each in two ways: first with the Waymo Driver in control of the “initiator” vehicle, the one that set the crash in motion, and then again with it as the “responder” vehicle, the one reacting to the initiator’s actions. That took the total to 91 simulations.

The Waymo Driver avoided every crash as initiator – a total of 52 simulations – Waymo says. That was mainly down to the computer following the rules of the road that human drivers in the actual crashes did not, such as avoiding speeding, maintaining a gap with other traffic, and not running through red lights or failing to yield appropriately.
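Those figures imply a breakdown the article never states outright; a quick back-of-the-envelope calculation (the inferred numbers are mine, not Waymo’s):

```python
total_crashes = 72     # fatal crashes Waymo reconstructed
total_sims = 91        # simulations run in all
initiator_sims = 52    # simulations with the Waymo Driver as initiator

responder_sims = total_sims - initiator_sims  # 39 (inferred)
# If every reconstructed crash was simulated at least once, the number of
# two-vehicle crashes modeled in both roles follows directly:
both_roles = initiator_sims + responder_sims - total_crashes  # 19 (inferred)
print(responder_sims, both_roles)
```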

On the flip side, where the Waymo Driver was the responder, it managed to avoid 82 percent of the crashes in the simulations. According to Waymo’s Victor, “in the vast majority of events, it did so with smooth, consistent driving – without the need to brake hard or make an urgent evasive response.”

In a further 10 percent of the simulations, the Waymo Driver was able to take action to mitigate the crash’s severity. In those cases, the driver was between 1.3 and 15 times less likely to sustain a serious injury, Waymo calculates.

Finally, in the remaining 8 percent of simulated crashes, the Waymo Driver was unable to mitigate or avoid the impact. These were all situations in which a human-operated vehicle struck the back of a Waymo vehicle that was stationary or moving at a constant speed, “giving the Waymo Driver little opportunity to respond,” Victor explains.

That is equally important, Waymo argues, because when they finally launch in any significant number, autonomous vehicles are going to have to coexist with human drivers on the road for some time to come. Those human drivers can’t be counted on to follow the rules of the road as stringently as Waymo’s software does.

Waymo has released a paper detailing its findings. Part of the challenge in assessing autonomous vehicles, it argues, is that high-severity collisions are thankfully relatively rare in the real world. As such, “evaluating effectiveness in these scenarios through public road driving alone is not practical given the gradual nature of ADS deployments.”


Science

Programmable optical quantum computer arrives late, steals the show

Excuse me a moment—I am going to be bombastic, overexcited, and possibly annoying. The race is run, and we have a winner in the future of quantum computing. IBM, Google, and everyone else can turn in their quantum computing cards and take up knitting.

OK, the situation isn’t that cut and dried yet, but a recent paper has described a fully programmable chip-based optical quantum computer. That idea presses all my buttons, and until someone restarts me, I will talk of nothing else.

Love the light

There is no question that quantum computing has come a long way in 20 years. Two decades ago, optical quantum technology looked to be the way forward. Storing information in a photon’s quantum states (as an optical qubit) was easy. Manipulating those states with standard optical elements was also easy, and measuring the outcome was relatively trivial. Quantum computing was just a new application of existing quantum experiments, and those experiments had already shown how easy the systems were to work with, giving optical technologies the early advantage.

But one key to quantum computing (or any computation, really) is the ability to change a qubit’s state depending on the state of another qubit. This turned out to be doable but cumbersome in optical quantum computing. Typically, a two- (or more) qubit operation is a nonlinear operation, and optical nonlinear processes are very inefficient. Linear two-qubit operations are possible, but they are probabilistic, so you need to repeat your calculation many times to be sure you know which answer is correct.

A second critical feature is programmability. It is not desirable to have to create a new computer for every computation you wish to perform. Here, optical quantum computers really seemed to fall down. An optical quantum computer could be easy to set up and measure, or it could be programmable—but not both.

In the meantime, private companies bet on being able to overcome the challenges faced by superconducting transmon qubits and trapped ion qubits. In the first case, engineers could make use of all their experience from printed circuit board layout and radio-frequency engineering to scale the number and quality of the qubits. In the second, engineers banked on being able to scale the number of qubits, already knowing that the qubits were high-quality and long-lived.

Optical quantum computers seemed doomed.

Future’s so bright

So, what has changed to suddenly make optical quantum computers viable? The last decade has seen a number of developments. One is the appearance of detectors that can resolve the number of photons they receive. All the original work relied on single-photon detectors, which could only distinguish light from no light. It was up to you to ensure that what you were detecting was a single photon and not a whole stream of them.

Because single-photon detectors can’t distinguish between one, two, three, or more photons, quantum computers were limited to single-photon states. Complicated computations would require many single photons that all need to be controlled, set, and read. As the number of operations goes up, the chance of success goes down dramatically. Thus, the same computation would have to be run many, many times before you could be sure of the right answer.
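To see how quickly that repetition overhead blows up, consider a toy calculation. The per-operation success probability below is invented purely for illustration; real values depend on the specific scheme.

```python
# With independent probabilistic operations, overall success is p**n,
# so the expected number of repetitions grows as (1/p)**n.
p = 0.5  # illustrative per-operation success probability, not a measured value
for n in (1, 5, 10, 20):
    print(f"{n:>2} operations: success {p**n:.2e}, ~{1 / p**n:,.0f} runs expected")
```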

By using photon-number-resolving detectors, scientists are no longer limited to states encoded in a single photon. Now they can make use of states encoded in the photon number itself. In other words, a single qubit can be in a superposition of containing different numbers of photons: zero, one, two, and so on, up to some maximum number. Hence, fewer qubits are needed for a computation.

A second key development was integrated optical circuits. Integrated optics have been around for a while, but they have not exactly had the precision and reliability of their electronic counterparts. That has changed. As engineers gained experience with the fabrication techniques and the design requirements of optical circuits, performance got much, much better. Integrated optics are now commonly used in the telecommunications industry, with all the scale and reliability that implies.

As a result of these developments, the researchers were able to simply design their quantum optical chip and order it from a fab, something unthinkable less than a decade ago. So, in a sense, this is a story 20 years in the making, built on the slow maturation of the underlying technology.

Putting the puzzle together

The researchers, from a startup called Xanadu and the National Institute of Standards and Technology, have pulled together these technology developments to produce a single integrated optical chip that generates eight qubits. Calculations are performed by passing the photons through a complex circuit made up of Mach-Zehnder interferometers. In the circuit, each qubit interferes with itself and with some of the other qubits at each interferometer.

As each qubit exits an interferometer, the direction it takes is determined by its state and by the internal setting of the interferometer. That direction determines which interferometer it moves to next and, ultimately, where it exits the device.
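Concretely, each Mach-Zehnder interferometer acts as a small 2×2 unitary on the pair of waveguides entering it, and its internal phase steers light between the two output ports. Here is a minimal numerical sketch using one common beamsplitter convention; the chip’s actual parameterization may differ.

```python
import numpy as np

def mzi(theta, phi):
    """Transfer matrix of one Mach-Zehnder interferometer: a phase phi
    on one input arm, a 50:50 beamsplitter, an internal phase theta,
    and a second 50:50 beamsplitter."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50:50 beamsplitter
    return bs @ np.diag([np.exp(1j * theta), 1]) @ bs @ np.diag([np.exp(1j * phi), 1])

# Output-port probabilities for light entering the first port:
print(np.round(np.abs(mzi(0.0, 0.0)) ** 2, 3))    # theta=0: light crosses over
print(np.round(np.abs(mzi(np.pi, 0.0)) ** 2, 3))  # theta=pi: light stays put
```

Dialing theta anywhere in between splits the light across both ports, which is why a mesh of these interferometers can realize an arbitrary transformation on the modes.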

The internal setting of the interferometer is the knob the programmer uses to control the computation. In practice, the knob just changes the temperature of individual waveguide segments. But the programmer doesn’t have to worry about these details. Instead, they have an application programming interface (the Strawberry Fields Python library) that takes very normal-looking Python code, which a control system then translates into the correct temperature differentials on the chip.
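To give a sense of what that looks like in practice, here is a minimal sketch in the style of Xanadu’s public Strawberry Fields examples. The gate sequence and mode pairings are assumptions based on the company’s published demos for its X8 hardware, not details taken from the paper, and the identity unitary is just a placeholder for a real program.

```python
import numpy as np
import strawberryfields as sf
from strawberryfields import ops

prog = sf.Program(8)  # eight optical modes

with prog.context as q:
    # Two-mode squeezing prepares the photon-number-entangled inputs
    for i in range(4):
        ops.S2gate(1.0) | (q[i], q[i + 4])

    # The programmable interferometer mesh; the unitary U *is* the program
    U = np.identity(4)  # placeholder; a real job supplies a problem-specific unitary
    ops.Interferometer(U) | (q[0], q[1], q[2], q[3])
    ops.Interferometer(U) | (q[4], q[5], q[6], q[7])

    # Photon-number-resolving detection on every mode
    ops.MeasureFock() | q

# Local simulation; a hardware job would target the chip via sf.RemoteEngine("X8")
eng = sf.Engine("gaussian")
result = eng.run(prog)
print(result.samples)  # one array of photon counts per mode
```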

Video: The company’s description of its technology.

To demonstrate that their chip was flexible, the researchers performed a series of different calculations. The first basically let the computer simulate itself: how many different states can it generate in a given time? (This is the sort of calculation that causes me to grind my teeth, because any quantum device can efficiently calculate itself.) After that, however, the researchers got down to business. They successfully calculated the vibrational states of ethylene (two carbon atoms and four hydrogen atoms) and of the more complicated phenylvinylacetylene (the favorite child’s name for 2021). These carefully chosen examples fit beautifully within the eight-qubit space of the quantum computer.

The third computation involved computing graph similarity. I must admit to not understanding graph similarity, but I think it is a pattern-matching exercise, like facial recognition. These graphs were, of course, quite simple, but again, the machine performed well. According to the authors, this was the first such demonstration of graph similarity on a quantum computer.

Is it really done and dusted?

All right, as I warned you, my introduction was exaggerated. However, this is a big step. There are no large barriers to scaling this same computer to a bigger number of qubits. The researchers will have to reduce photon losses in their waveguides, and they will have to reduce the amount of leakage from the laser that drives everything (currently it leaks some light into the computation circuit, which is very undesirable). The thermal management will also have to be scaled. But, unlike previous examples of optical quantum computers, none of these are “new technology goes here” barriers.

What is more, the scaling does not present huge amounts of increased complexity. In superconducting qubits, each qubit is a current loop in a magnetic field. Each qubit generates a field that talks to all the other qubits all the time. Engineers have to take a great deal of trouble to decouple and couple qubits from each other at the right moment. The larger the system, the trickier that task becomes. Ion qubit computers face an analogous problem in their trap modes. There isn’t really an analogous problem in optical systems, and that is their key advantage.

Nature, 2021, DOI: 10.1038/s41586-021-03202-1 (About DOIs)
