Motherboard is an online magazine and video channel dedicated to the intersection of technology, science and humans. It raises its eyebrows at the people and things that are making our weird and wonderful present and future, with news, commentary, in-depth reporting, photos, and original video documentaries.
Several weeks ago you may have seen Tesla and SpaceX CEO Elon Musk selling flamethrowers on the internet to raise money for his Boring Company infrastructure project. Needless to say, these inferno bringers sold like hotcakes: it took Musk only four days to sell 20,000 flamethrowers at $500 each.
While Musk’s version may be sold out, this has not stopped some from going the do-it-yourself route.
One of these intrepid pyromaniacs, YouTuber Jason Salerno, uploaded his Boring Company clone earlier this month. Salerno’s version is made of an airsoft gun, a propane torch, a propane tank, a propane extension hose, and one bottle holder extracted from an unsuspecting bicycle.
The creator began by stripping the white airsoft gun clean, removing all of its firing mechanisms. He then modified the blowtorch to fit inside the plastic gun’s handle and reinserted the necessary components, transforming the plastic pea shooter into a handheld firebreather. Salerno did not immediately respond to a request for comment from Motherboard.
And last but not least, he added a black stenciled sticker featuring the Boring Company brand name.
Image: Jason Salerno/YouTube
A redditor with the username rainman95135 took Salerno’s video as inspiration and set out to make his own version of the flamethrower. On Reddit, rainman95135 said the materials for his flamethrower cost $189.
Not surprisingly, Musk’s flamethrowers have been met with some opposition. New York Reps. Eliot Engel and Carolyn Maloney introduced bill H.R. 4901 (also known as the “Flamethrowers? Really? Act”) in response, which would ban flamethrowers that shoot fire more than six feet and would "treat flamethrowers like machine guns." The bill would define a flamethrower as:
“Any non-stationary or transportable device designed or intended to ignite and then emit or propel a burning stream of a combustible or flammable substance a distance of at least 6 feet away.”
But it appears that with a little ingenuity, anyone can get their hands on one of these—and for about $300 less than what Musk is selling them for.
Motherboard, by Lorenzo Franceschi-Bicchierai, Jaso..
Last year, a vigilante hacker broke into the servers of a company that sells spyware to everyday consumers and wiped their servers, deleting photos captured from monitored devices. A year later, the hacker has done it again.
Thursday, the hacker said he started wiping some cloud servers that belong to Retina-X Studios, a Florida-based company that sells spyware products targeted at parents and employers, but that are also used by people to spy on their partners without their consent.
A Retina-X spokesperson said in an email Thursday that the company hasn’t detected a new data breach since last year. Friday morning, after the hacker told us he had deleted much of Retina-X’s data, the company again said it had not been hacked. But Motherboard confirmed that the hacker does indeed have access to its servers.
Friday, Motherboard created a test account using Retina-X’s PhoneSheriff spyware in order to verify the hacker’s claims. We downloaded and installed PhoneSheriff onto an Android phone and used the phone’s camera to take a photo of our shoes.
“I have 2 photos of shoes,” the hacker told us moments later.
The hacker also described other photos we had on the device, told us the email account we used to register the account, and then deleted the data from our PhoneSheriff account.
“None of this should be online at all,” the hacker told Motherboard, claiming that he had deleted a total of 1 terabyte of data.
“Aside from the technical flaws, I really find this category of software disturbing. In the US, it's mainly targeted to parents,” the hacker said, explaining his motivations for going after Retina-X. “Edward Snowden has said that privacy is what gives you the ability to share with the world who you are on your own terms, and to protect for yourself the parts of you that you're still experimenting with. I don't want to live in a world where younger generations grow up without that right.”
In the first Retina-X data breach last year, the hacker was able to access private photos, messages, and other sensitive data from people who were monitored using one of Retina-X’s products. The private data was stored in containers provided by cloud provider Rackspace. The hacker found the key and credentials to those containers inside the Android app of PhoneSheriff, one of Retina-X’s spyware products. The API key and the credentials were stored in plaintext, meaning the hacker could take them and gain access to the server.
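The weakness described here, credentials shipped in plaintext inside an app package, is trivial to find with basic tooling. As a rough illustration (the regex, the sample text, and the key names below are hypothetical, not taken from the actual PhoneSheriff app), a scan for credential-shaped strings in decompiled code might look like this:

```python
import re

# Hypothetical example: scan decompiled app sources for credential-like
# strings. The pattern and sample content are illustrative only.
KEY_PATTERN = re.compile(
    r'(api[_-]?key|password|secret)\s*[:=]\s*["\']([^"\']+)["\']',
    re.IGNORECASE,
)

def find_plaintext_credentials(text):
    """Return (label, value) pairs for anything that looks like a hardcoded credential."""
    return [(m.group(1), m.group(2)) for m in KEY_PATTERN.finditer(text)]

# Toy decompiled source fragment (made up for illustration)
sample = 'String API_KEY = "abc123secretkey";\nString user = "admin";'
print(find_plaintext_credentials(sample))  # [('API_KEY', 'abc123secretkey')]
```

In practice an attacker would run a scan like this over every string in a decompiled APK, which is why the standard advice is to keep secrets server-side rather than embedding them in the client at all.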
This time, the hacker said the API key was obfuscated, but it was still relatively easy for him to obtain it and break in again. Because he feared another hacker getting in and then posting the private photos online, the hacker decided to wipe the containers again.
Shortly after Motherboard first reported the Retina-X breach in February of last year, a second hacker independently approached us, and said they already had been inside the company’s systems for some time. The hacker provided other internal files from Retina-X, some of which Motherboard verified at the time.
Answering a series of questions about what Retina-X changed after last year’s hack, a spokesperson wrote in an email that “we have been taking steps to enhance our data security measures. Sharing details of security measures could only serve to potentially compromise those efforts.”
“Retina-X Studios is committed to protecting the privacy of its users and we have cooperated with investigating authorities,” the spokesperson wrote. “Unfortunately, as we are well aware, the perpetrators of these egregious actions against consumers and private companies are often never identified and brought to justice.”
At the end of 2016, the hacker gained access to the servers of Retina-X, which makes several spyware products, and started collecting data and moving inside the company’s networks. Weeks later, the hacker shared samples of some of the data he accessed and stole with Motherboard. But he didn’t post any of it online. Instead, he wiped some of the servers he got into, as the company later admitted in February of 2017.
The new alleged hack comes just a few days after the hacker resurfaced online. At the beginning of February, the hacker started to dump online some of the old data he stole from Retina-X in late 2016. The hacker is now using a Mastodon account called “Precise Buffalo” to share screenshots recounting how he broke in, as well as raw data from the breach, though no private data from victims and targets.
In February of 2017, a Motherboard investigation based on data provided by hackers showed that tens of thousands of people—teachers, construction workers, lawyers, parents, jealous lovers—use stalkerware apps. Some of those people use the stalkerware apps to spy on their own partners without their consent, something that is illegal in the United States and is often associated with domestic abuse and violence.
In facing mortality, we'll go to nearly any lengths for answers, for assurance, for cures. And some things may never change. Enjoy. -the Ed
The girl called it nadi pariksha, a pulse examination. The only difference Alsatia could tell from the cursory procedure of a run-of-the-mill MD was that she closed her eyes while gently pressing her slender fingers on Alsatia’s wrist. That and the frieze of blue people on the walls, holding lotus flowers, their long limbs improbably angled.
“‘Nadi’ is Sanskrit,” the girl intoned, releasing Alsatia. “It means motion, flow, vibration. Nadis weave through our physical and spiritual selves. They are the consciousness matrix that supports our physical presence in the world…”
Said with enough calm, Alsatia mused, anything could be made to sound plausible. But the girl had already glided on.
“Ida is associated with the energy of the moon. Also with the feminine. It controls the function of the parasympathetic nervous system…”
“Please,” Alsatia began.
The girl paused, smiled, appreciating that Alsatia was not there for philosophy. The muffled sound of kindergarten recess drifted up from outside, shrill squeals forcing their way through the closed windows. Alsatia fancied she could make out snowball fights.
“It is through the nadis that the prana, your life force, circulates. Your ida is blocked. Your prana can no longer flow freely.”
Stepping back out onto the sidewalk, Alsatia blinked in the bright winter sunshine.
The yogi was not as old as Alsatia expected, but he made up for it with a fixed frown, furrowed brow beneath close-cropped hair. She wondered whether the serious expression was there to counteract the joyfulness of the saffron robes.
He had her lie on a couch, her eyes closed, and asked her questions that meandered like a leaf on a lake blown by the breeze. He asked about money, her home life; desires and pleasure; fears of rejection. A tang of joss-sticks hung in the air, masking some other smell; carpet freshener, perhaps. The hum of a heater became a Buddhist Om.
Alsatia snapped to, suddenly realizing that she had no idea of the time, of how long she had been there. She had been counting the flecks in the wallpapered ceiling, talking about daily routine, fatigue.
When she sat up she felt slightly woozy and for a moment, wondered whether this had all been a ruse; a nagging fear that she had been drugged and violated.
“…kundalinī shakti is spiritual energy. It has its roots in the mūlādhāra chakra, but it is slumbering. Awaken the potential that lies within the mūlādhāra chakra and we work our way towards the light of knowledge, attain the rewards of self-realization…”
“What exactly are you telling me?”
“Your chakras are your energy centers. To release the trapped, stale energy you need to unblock your chakras."
The third was a tarot reader. Why were all these healers up at least two flights of stairs, Alsatia wondered. And always impossibly narrow and steep at that.
She had donned the robes of the circus fortune-teller, an attempt to create an air of mysticism, but which only produced an effect of amateur dramatics. Beads and bangles, the trappings of the gypsy. Madame Spisene, she called herself. Outside she was no doubt something more prosaic, a Patti or a Cath.
As each card was turned over a phrase fell from her lips, dark eyes avoiding Alsatia’s gaze, as if weighed down by the headscarves. With neither beginning nor end, a stream of thoughts briefly vocalized before falling back out of earshot.
“…that which is soon to be a memory…” “…you are the secret and the secret is you…” “…the phoenix is rising, you are coming out of the ashes and being reborn…”
Alsatia’s eyes wandered around the room, settling on the caged bird that constantly fluttered and settled, fluttered and settled, as if in a cycle of discovering and forgetting its bars. She wondered when she’d be told that she would meet a tall dark stranger, go on a long journey. And then Madame Spisene turned over what looked like grinning incubi sprawled over a spinning wheel. Inked in gothic script at their feet: The Wheel of Fortune. “Fate doesn't turn to greet us, but we turn to greet our Fate.”
And then the wannabe Roma paused, and for the first time sounded thoughtful, no longer sleepwalking through a script. “The wheel represents not only fate but flow. Flow of power, flow of energy.”
Madame Spisene turned over one last card.
And it was the skeleton Death, rictus grin and scythe.
“It’s like death, isn’t it?”
It took Alsatia a moment to remember where she was, to construct the moments up to that point. Who was the laughing man, looming at her, reeling in a cable as she lay back in a reclining chair. All was stark white, like looking into the sun. Details coalesced. A small windowless room, functional. Desks, monitors, technical equipment. Charts. A scribbled-on year planner.
A defaced and dog-eared motivational poster, photocopies of cartoons.
“Alsatia, my dear…”
A friendly, open manner, even amidst the continual nervous movement. He knew her. Or was this an act? A sense came over her that she had been here only a short while, and yet that they had shared something, that she had opened up to him.
“…you have to be realistic about your age.” A pause, a thought. “You’re still readjusting, aren’t you? That’s what I mean. I had to take you offline to run a full diagnostic.”
Apologetically, he held up the snake of cable as he fed it back into a wall-mounted holder into which it sucked itself greedily, ending with only its multi-pin plug dangling. Alsatia felt her stomach for the equivalent socket before the doctor leant in and smoothed the plastiskin back over.
“There are whole generations of androids that have come along since you. Yours was the first that truly emulated the human animal, right down to the nuts and bolts, corpuscles and capillaries. But one day we all show our age.”
“So, what is wrong with me, doctor?”
“I’m not a doctor, I’m just a technician,” he said, spinning a swivel chair from one desk of monitors to another, checking readouts. “A doctor couldn’t do what I do.”
“So, what’s wrong with me?”
He shrugged, waved his hands vaguely. “It’s a capacitance issue. A resistance problem. Circuit outages. Voltage drops. Amperage fluctuations…”
“Energy flow? Blockages?”
The technician smiled, stilled by Alsatia’s assessment. “Yeah. You got it. Blockages to your energy flow. Hard to be more precise.”
Alsatia considered. “What happens now?”
“Well, before you make any hasty decisions you have every right to look for a second opinion.”
A version of this post originally appeared on Tedium, a twice-weekly newsletter that hunts for the end of the long tail.
Today’s startup companies seem to have a certain arc to them: they get some seed funding, they launch, they draw some interest for their good idea, they keep growing, and maybe they become a part of the fabric of our lives … or a part of the fabric of a significantly larger company.
Strangely, 3Dfx didn’t so much draw interest as blow the lid off of a trend that redefined how we think of video games.
Its graphics processing units were just the right technology for their time. And, for that reason, the company was everywhere for a few years … until it wasn’t.
So, what happened—why did 3Dfx turn into a cautionary tale? It's complicated, but today’s Tedium sifts through all the polygons and the shaded textures.
“It was painful to watch 3Dfx slip from the archetypical kick-ass technology startup to where they wound up. I think I would have been happiest to have the PC market divided up between three strong players that all had their act together, but at this point, I'm not too unhappy with the market simplification resulting from 3Dfx exiting.”
— John Carmack, the longtime id Software programming guru who currently serves as Oculus VR’s chief technical officer, discussing the shutdown of 3Dfx in 2000. (He made these comments, somewhat ironically, to Voodoo Extreme, a 3Dfx enthusiast site.) Carmack served on the graphics card maker’s technical advisory board and expressed frustration with the slow pace of the company’s Rampage project.
A Voodoo2 graphics add-on card, as produced by Diamond. In particular, Diamond saw much success working with 3Dfx. Image: Wikimedia Commons
The corporate failures and missed opportunities that set the stage for 3Dfx
Around the time that 3Dfx and other 3D graphics card makers were just getting their footing in the market, the computer maker Silicon Graphics International was helping to supply Nintendo with much of the 3D technology it was using in its Nintendo 64.
SGI had much of the technology to drive the industry forward, but the company was built around ultra-high-end supercomputers, and that left standard PCs out of the equation.
But the thing was, there was a market on the PC—potentially, a big one, one that was only being hinted at by the early success of shareware games like Doom. SGI, which was selling $100,000 development kits to aspiring N64 publishers, was not the right vessel to reach what promised to become a sizable market. Put another way, they were the Wang Computers of 3D graphics—a company riding the prior generation’s horse.
But there were signs something important was coming. In 1995, Microsoft purchased a startup called RenderMorphics, whose technology became the basis of Direct3D, which became a fundamental standard for 3D games on the PC. Around that same time, id Software was knee-deep in development on Quake, which had an engine built from the ground up to generate 3D worlds from even modestly fast Pentium processors. While the earlier Doom is arguably more influential overall, Quake would prove to be something of a killer app, helping to drive interest in 3D graphics cards.
While SGI wasn’t well-suited to take advantage of the market shift toward commodity graphics, its alumni were. And in 1994, three of those alumni—Scott Sellers, Ross Smith, and Gary Tarolli—launched 3Dfx.
The path that got them to the point of creating a startup was a bit messy. An in-depth oral history done by the Computer History Museum, which I’ll reference a lot here, notes that a prior offshoot of SGI, Pellucid, eyed the idea of making 3D graphics cards for PCs before being acquired by Media Vision, a company known for selling competitors to the Sound Blaster. Media Vision, which often bundled its multimedia tools into a single kit, would have been a natural fit for what the trio was trying to do.
“So it made a lot of sense to actually build this 3D product as part of Media Vision,” Sellers recalled. “Except there was just one minor problem, which was Media Vision was run by crooks.”
Initially, 3Dfx and its Voodoo technology were focused intently on the arcades, and the company’s big debut came at the 1996 edition of the Electronic Entertainment Expo. The first game to use its GPU technology, believe it or not, was a virtual batter’s box called Home Run Derby. The cabinet was massive, and players used a real baseball bat to play.
“To play, a batter enters a batting cage and stands at home plate in front of a big screen monitor, awaiting a pitch. A 3-D ball is then hurled towards the batter,” an early press release explained of the game. “As it nears home plate, the batter swings. Interactive Light's proprietary infrared sensors instantly measure the batter's timing to determine speed, angle and orientation. These measurements determine the direction of the ball, and whether or not it will be a home run. The ‘camera’ follows the ball into the field on a hit or into the crowd on a home run.”
Sure, the arcades were a great place to introduce 3D graphics to the world. But the arcade industry was a struggling beast by that point.
And soon, a much larger opening emerged—in the form of RAM prices.
At the beginning of 1996, aftermarket RAM cost around $30 per megabyte, according to a detailed price comparison of Byte magazine ads by retired computer science professor John C. McCallum. (You might remember his analysis from my piece about the RAM shortage of 1988.)
In the two years prior, the price per megabyte had actually jumped significantly, but by the end of 1996 it had fallen to just $5.25. That sharp drop, especially for the extended data output (EDO) RAM preferred at the time, helped make 3D graphics cards much cheaper than they would have been otherwise—and effectively put the products within reach of consumers.
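To make the scale of that drop concrete, here is a quick back-of-the-envelope calculation. The 4 MB board size is an assumption for illustration (early Voodoo Graphics boards commonly shipped with around 4 MB of EDO RAM):

```python
# Back-of-the-envelope cost of the RAM on a 4 MB graphics board,
# using the per-megabyte prices cited above. The 4 MB figure is an
# assumption for illustration.
price_early_1996 = 30.00   # dollars per MB, start of 1996
price_late_1996 = 5.25     # dollars per MB, end of 1996
board_ram_mb = 4

cost_early = board_ram_mb * price_early_1996
cost_late = board_ram_mb * price_late_1996
print(cost_early, cost_late)  # 120.0 21.0
print(f"price drop: {100 * (1 - cost_late / cost_early):.1f}%")  # price drop: 82.5%
```

In other words, the RAM bill for a board like this fell from well over $100 to about $20 in a single year, enough to move 3D accelerators from exotic hardware to an impulse upgrade.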
3Dfx was just the company to take advantage of this market opportunity.
A video of what Quake looked like on a Voodoo chipset.
Four reasons 3Dfx’s Voodoo Graphics took the PC world by storm
3Dfx had its own API and successfully pitched it to developers. In the Computer History Museum’s oral history, company co-founder Ross Smith noted that the company was able to take advantage of the fact that most games were being developed in DOS at the time to create its own API, Glide, and sell it to game developers. “And that was very radical for a graphics company to be doing that,” Smith recalled. “Because normally, you would just use whatever API Microsoft published or whatever. And you'd build this hardware and hope for the best. We couldn't do that, because there were no APIs.” This meant all the big games that supported 3D graphics backed 3Dfx.
It won over Quake—and John Carmack. Quake was considered 3Dfx’s initial killer app, per Sellers, and 3Dfx played a direct role in making that win happen. The company took advantage of the fact that id Software made it possible to extend the game’s rendering capabilities, and demonstrated its hardware to Carmack, who built the Quake engine. “And that was a huge step for us. Because once he saw it, then his mind just started cranking,” Sellers noted in the oral history. “And he very quickly went from being kind of a software rendering purist, to one saying, hey, I don't need to do all this stuff anymore. I can take advantage of the hardware.” Winning over Carmack produced an instant surge in the company’s sales and helped shape the industry as a whole.
It focused on 3D—and 3D alone—at first. Rather than adding 2D graphics capabilities to its system and risking lower performance, 3Dfx chose to focus exclusively on 3D graphics at first. As a result, its 3D cards outperformed those of companies that tried to tack on 3D functionality after the fact.
It leaned heavily on partnerships with larger companies. Sellers notes that when the company first started selling chips for PC boards, it wasn’t very well known, but it soon collaborated with Creative Labs and Diamond Multimedia, two major expansion board companies, to offer cards based on 3Dfx chips. (Diamond’s Monster3D, in particular, was an iconic graphics board during this era that was based on 3Dfx.) This created branding opportunities that helped boost the company’s recognition.
“Buying into 3Dfx is a very, very smart move for Sega. Not only is the company the best there is at what they do, but the technology is also well-liked by the developing community.”
— A Sega Saturn Magazine take from July 1997 on Sega’s deal with 3Dfx to produce the graphics hardware for the Dreamcast. The agreement was perhaps the most significant in the company’s history up to that point, but it did not last. Near the end of July 1997, Sega infamously backed out of the deal and went with NEC’s competing PowerVR technology instead. The wound may have been at least partly self-inflicted; Sega was reportedly none too pleased when 3Dfx, ahead of its initial public offering, revealed the existence of the deal in an investor document. Nonetheless, it was the first major knock of many the company would face. (Despite the ultimate failure of the Dreamcast in the market, PowerVR would become a key element of many mobile platforms, serving as something of the ARM chip of the computer graphics world.)
The acquisition that really screwed up a good thing for 3Dfx
In October of 1997, Next Generation, an iconic 1990s gaming mag that remains well regarded even though it hasn’t published in more than 15 years, ran an article posing the question raised by a company that had come out of basically nowhere to dominate the gaming sector: “Is 3Dfx Here to Stay?”
With the recent bruises from the aborted Sega deal, it wasn’t the worst question to ask. Greg Ballard, the CEO of the company who was brought in from Capcom, put a suitably rosy picture on the situation.
“It has taken the industry—our competitors—14 months to even catch up with us, and within several months, we’ll be leaping ahead of them again,” he stated. “That’s what happens when you have a year-to-14-month lead on the industry. And we think our next leap in technology will be an improvement of an order of magnitude in the technology.”
Unfortunately for the company, the room for error was closing. Between 1996 and 1998, 3Dfx could do little wrong, but it did have its stumbles. Among them was the company’s attempt to combine 2D and 3D graphics on a single chipset, the Voodoo Rush. The dedicated approach for which it was known made sense in the first generation, but it didn’t work so well in integrated form, with the 2D half of the equation suffering. The Voodoo2 line, on the other hand, which continued the dedicated approach, was just as successful as the first model, while offering some of that expected leapfrogging.
However, the company’s leapfrogging capabilities would soon be hobbled by strategic shifts. At the tail end of 1998, the company announced it would acquire STB Systems, a major manufacturer of graphics cards. As a result of the merger, 3Dfx would stop supplying its chipsets to other companies as an original equipment manufacturer, or OEM, and instead manufacture its own cards, starting with the Voodoo 3. The change came just as competition from other major players, particularly Nvidia and ATI, was heating up, meaning 3Dfx suddenly had both a new business model and a ton of competition.
The firm’s initial willingness to supply lots of manufacturers was effective, but it was one factor behind what became a massive glut in the graphics card space. CNET reported that by 1999, there were more than 40 companies producing graphics accelerators, a crowded market in which strategic errors would not be forgiven. On one hand, consumers benefited from the intense competition, with products improving on all fronts. On the other, it created a treadmill effect that 3Dfx was ultimately unable to keep up with.
“And so it really was a we're-going-to-do-it-all kind of strategy. And that's a big bet,” 3Dfx’s Scott Sellers recalled in the Computer History Museum’s oral history. “And when—just a little bit of slip as we did—we were a little bit late coming out with some of the next generation products and didn't have the runway to come up with the next generation products, which I think would have been very compelling on the market. But ultimately, we ran out of time.”
The company’s inability to execute on its pipeline, particularly on its next-generation Rampage endeavor, eventually cost it momentum, developer support (John Carmack didn’t exactly sound happy with 3Dfx by the end, did he?), and ultimately its lifeline.
Nvidia, which has emerged as one of the two main players in the graphics processing unit market (ATI, later bought by AMD, was the other), bought out 3Dfx’s intellectual property in late 2000. Nvidia, in a way, got a twofer deal out of the equation—3Dfx had acquired Gigapixel, a firm that had competed with Nvidia for a spot in the Xbox, just a few months before.
3Dfx created the treadmill, but only its competitors could keep up.
Currently, the GPU industry is facing a major shortage, one caused by reasons completely unrelated to the root causes of the GPU flood of the late 1990s.
Back then, a whole lot of companies saw an opportunity to bring high graphics capabilities to home users; today, GPUs are used for things as diverse as cryptocurrency, artificial intelligence, and automobiles. Supply pipelines, once flush with chips and cards, are running dry, with the rising popularity of crypto-mining a main factor.
A video clip featuring Valley of Ra, one of 3Dfx's demos of its rendering technology.
Clearly, none of this stuff was even a glimmer of an idea at the time 3Dfx shook up the GPU market in 1996. But it’s hard not to wonder whether the competitive spirit the company fostered helped kick off a world where GPU complexity would take so many paths beyond graphics.
When asked if 3Dfx would have been able to take advantage of the general-purpose uses commonly seen with GPUs today, Sellers noted in the company’s oral history that things were just getting to the point where it might have been feasible to think in those terms.
“It was predominantly gaming. We didn't really have the ability to program the chips, per se, what you would need to have that kind of flexibility in terms of the GPU-like capability today,” he noted. “The product that we were working on at the very end before we sold to Nvidia, we did have a separate geometry chip that we were working on that perhaps could have done some of those types of things.”
Clearly, 3Dfx—once an icon of PC gaming—lost the plot at some point, only to be usurped by other companies. But I admit to wondering what might have happened had it been able to stay on that treadmill.
In January, Motherboard reported on a community devoted to deepfakes, fake porn videos of celebrities created with a machine learning algorithm. Less than a week later, several websites where these images were posted started banning deepfakes from their platforms.
One of the most popular platforms for hosting these images, Gfycat, told Motherboard at the time that deepfakes violated its terms of service because they were “objectionable,” and that it was “actively removing this content.”
At the time, a spokesperson for the company told me in an email that they weren’t technically “detecting” deepfakes. This statement was similar to how Reddit, Discord, Twitter and Pornhub each said they’d handle nonconsensual porn: Rely on users to report or use keywords to keep an eye on where these images are popping up on the platform.
Now, Gfycat seems to be taking a more aggressive approach. Wednesday, Gfycat told Wired in detail how it plans to moderate deepfakes going forward. The plan, basically, is to fight AI with AI. It's the most promising response to a new and troubling problem we've seen yet, but that doesn't mean the problem is solved. We've seen similar automated solutions for policing content introduced on platforms like YouTube and Facebook, only to see those solutions undermined by users shortly after.
According to Wired, Gfycat’s moderation relies on two in-house AI tools, Maru and Angora. Maru is the first line of defense against deepfakes: it can see that a fake porn gif of Gal Gadot kind of looks like Gal Gadot, but isn’t quite right, and flags it. If Maru isn’t sure whether a gif is fake, Angora searches the internet for the video it’s sourced from, the same way it already searches for source videos to create higher-quality gifs. In this case, however, it also checks whether the face in the source material matches the face in the suspect gif. If the faces don’t match, the AI concludes the image has been altered and, in theory, rejects the fake.
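In rough terms, the two-stage check Wired describes might look like the sketch below. Every function name here is a hypothetical stand-in; Gfycat has not published its actual implementation, and the 0.9 confidence threshold is invented for illustration:

```python
# Hypothetical sketch of a two-stage deepfake-moderation pipeline,
# modeled on Wired's description of Gfycat's Maru and Angora tools.
# None of these functions reflect Gfycat's real code; the 0.9
# threshold is invented for illustration.

def moderate_gif(gif, recognize_face, find_source_video, faces_match):
    """Return 'reject' for a likely deepfake, 'allow' otherwise.

    recognize_face(gif)    -> (known_face_name or None, confidence in 0..1)
    find_source_video(gif) -> source video or None (reverse search)
    faces_match(gif, src)  -> True if the gif's face matches the source's
    """
    name, confidence = recognize_face(gif)

    # Stage 1 ("Maru"): the gif resembles a known face but isn't a
    # confident match -- the telltale "almost Gal Gadot" signature.
    if name is not None and confidence < 0.9:
        return "reject"

    # Stage 2 ("Angora"): locate the original footage and compare faces.
    source = find_source_video(gif)
    if source is not None and not faces_match(gif, source):
        return "reject"

    # No public source found: the pipeline cannot tell, so it allows.
    return "allow"

# Demo with stub detectors (purely illustrative).
verdict = moderate_gif(
    "suspect.gif",
    recognize_face=lambda g: ("Gal Gadot", 0.7),  # close, but not quite
    find_source_video=lambda g: None,
    faces_match=lambda g, s: True,
)
print(verdict)  # reject
```

Note the allow-by-default branch when no public source exists: that is exactly the blind spot for private citizens that the article discusses.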
This sounds great in theory, but as Wired points out, there are scenarios where deepfakes will slip through the cracks. If someone makes a deepfake of a private citizen—think vindictive exes or harassers scraping someone’s private Facebook page—and no images or videos of that person appear publicly online, the algorithms won’t be able to find a source video and will treat the deepfake as original content.
Gfycat’s tool, then, is exclusively useful for celebrities and public figures; not a bad step, but not helpful for preventing revenge porn of lesser-known people.
“We assume that people are only creating deepfakes from popular or famous sources,” a Gfycat spokesperson told me. “We consider a video the ‘original source’ when it comes from a trusted place."
I also asked Gfycat about adversarial methods—images altered in ways imperceptible to the human eye that fool AI into mistaking one thing for another. For example, researchers were able to convince an AI that a turtle was actually a rifle (to a human, the turtle looks nothing like a rifle). This may seem like an advanced technique the average user wouldn’t be able to pull off, but a few months ago it was also hard to imagine that anyone with a consumer-grade GPU could create their own convincing fake porn videos.
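The core trick behind adversarial examples is simple: nudge each input value slightly in the direction that most increases the classifier's error. Here is a minimal, self-contained sketch on a toy linear classifier (real attacks such as FGSM apply the same idea to deep networks using gradients from backpropagation; all the numbers below are made up):

```python
# Minimal adversarial-perturbation sketch on a toy linear classifier.
# Real attacks (e.g. FGSM) do the same thing to deep networks, using
# gradients from backpropagation. All numbers here are made up.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def classify(weights, x):
    """Score > 0 means class A ('turtle'); score < 0 means class B ('rifle')."""
    return sum(w * xi for w, xi in zip(weights, x))

def adversarial(weights, x, epsilon):
    """Nudge every input against the gradient of the score.

    For a linear model, the gradient of the score with respect to x is
    just the weight vector, so the perturbation is -epsilon * sign(w)."""
    return [xi - epsilon * sign(w) for w, xi in zip(weights, x)]

weights = [0.5, -0.25, 1.0]
x = [1.0, 1.0, 1.0]                      # original input, score = 1.25
x_adv = adversarial(weights, x, epsilon=0.8)

print(classify(weights, x))              # 1.25 -> class A
print(classify(weights, x_adv) < 0)      # True -> flipped to class B
```

On high-dimensional inputs like images, a much smaller epsilon suffices, so the per-pixel changes can be too small to see, which is what makes results like the turtle-to-rifle misclassification possible.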
“If faked content uses adversarial AI, it may probably fool at least the Angora method with enough work,” a Gfycat spokesperson said. “We have not seen the use of adversarial AI in content uploaded to Gfycat, but we expect that Maru would be more resistant to this technique if it leaves research labs.”
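The adversarial technique under discussion is easiest to see on a toy model. Below is a minimal fast-gradient-sign-style sketch against a hypothetical linear classifier; the weights, input, and epsilon are all made up for illustration, and real attacks target deep networks rather than a hand-written linear score.

```python
import numpy as np

# Hypothetical linear "classifier": positive score means class A.
w = np.array([1.0, -2.0, 0.5])

def classify(x):
    return "A" if np.dot(w, x) > 0 else "B"

def fgsm_perturb(x, epsilon=0.3):
    """Fast-gradient-sign-style step. For the linear score w.x the
    gradient with respect to x is just w, so moving each coordinate
    by -epsilon * sign(w) pushes the score down as fast as possible
    while changing no coordinate by more than epsilon."""
    return x - epsilon * np.sign(w)

x = np.array([0.2, -0.1, 0.4])   # score 0.6, classified "A"
x_adv = fgsm_perturb(x)          # small per-coordinate change
```

A per-coordinate nudge of 0.3 flips the toy classifier's decision; on real image models the same principle works with perturbations far too small for a human to notice, which is what makes the turtle-as-rifle result possible.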
Pitting AI-driven moderators against AI-generated videos sounds like a harbinger of the fake news apocalypse. Robots make damaging videos, and other robots chase them down to nuke them off the internet. But as machine learning research becomes more democratized, it’s an inevitable battle—and one that researchers are already entrenched in.
Justus Thies, a postdoctoral researcher at the Technical University of Munich, developed Face2Face—a project that looks a lot like deepfakes in that it swaps faces in real-time, with an incredibly realistic end result:
Thies told me in an email that since he and his colleagues know exactly how powerful these tools can be, they are also working on digital forensics, and looking for new ways to detect fakes.
“With the development of new technologies, also the possibilities of misuse increases,” he said. “I think [deepfakes] is an abuse of technology that has to be banned. But it also demonstrates the need of fraud detection systems that most likely will be based on AI methods.” Face2Face is good, but it still leaves digital artifacts behind, he said. Thies is working on algorithms that detect such artifacts to spot fakes.
Gfycat’s proactive approach to moderating deepfakes is a positive early step for platforms, but if the history of the internet teaches us anything, it's that it's impossible to squash objectionable content 100 percent of the time.
On Thursday, TechCrunch reported on at least one Unicode symbol that could crash iOS and Mac apps when a user simply viewed the character. Naturally, some people may have sent the symbol to individuals in private chats, perhaps to annoy their friends. But others have taken a different approach, blasting the symbol across social media and other widely used apps, potentially crashing the devices of many more people—including, it seems, mine.
A Twitter user with the symbol in their screenname ‘liked’ one of my tweets late on Thursday night. Shortly after the notification popped into my feed, my Twitter app on iOS became briefly unresponsive before crashing. When I tried to open the app again, it launched, hung for a few seconds, and then closed. Uninstalling and reinstalling the app temporarily fixed the issue, but the same user liked another of my tweets on Friday morning, causing the whole thing to happen again.
“My friends [have] been complaining about their phone crashing on Twitter,” Amir, the Twitter user, told me in a direct message (I managed to have the conversation while on a non-iOS device).
On Thursday, Twitter released an iOS app update that fixes “a crash that affects users of right-to-left languages such as Arabic and Hebrew,” according to the update notes. That update does not appear to address the current, ongoing issue: my iOS app still crashes when viewing the character. (The character is from Telugu, a South Indian language.) Twitter did not immediately respond to a request for comment. According to The Guardian, Apple is working on a patch for its operating systems.
Funnily enough, Amir said he has tried to change his handle back for around 12 hours, but Twitter blocks the name change, presumably so users can’t tweak their screen name too often.
Others have used the symbol on the Uber app, too.
“I keep requesting a ride but all the drivers in the area seem to no longer be on their phones or accepting rides,” pseudonymous security researcher MG tweeted on Thursday, along with a screenshot of an Uber profile containing the offending symbol.
Security researcher Darren Martyn tested what would happen when putting the symbol in the name of a Wi-Fi network. It crashed his Mac’s network application, according to a video he posted to Twitter on Thursday.
In TechCrunch’s original tests, the crash was reproducible across Slack, Mail, Messages, Facebook, Instagram, and Chrome. As TechCrunch reported, software engineers at Aloha Browser originally found the bug, and said Apple is aware of it.
The ecological and evolutionary circumstances that drive a species to extinction are manifold and related to one another in complex ways. There’s predation, ecological collapse, and disease, or as may have been the case with most critters around at the end of the Mesozoic era, plain cosmic bad luck. The important point is that what drives a species to extinction is highly dependent on the idiosyncrasies of the organism and its local environment.
Nevertheless, ecologists attempt to model general population dynamics in order to better understand the population fluctuations of a particular species, and why some eventually go extinct. The world is currently experiencing an alarmingly high rate of species extinction, so having accurate insight into this process is more important than ever.
Many ecological models used to assess extinction risk look at the interplay of two main factors: resource availability and the size of the population. Generally speaking, if resources are abundant, the population increases; if resources are scarce, the population decreases.
Implicit in these models is the idea that if resources are abundant, an animal will be able to harvest enough energy from its environment to reproduce. On the flip side, if resources are scarce, the animal will devote these meager resources to sustaining itself, rather than trying to produce offspring. This theory has seen validation in species ranging from reindeer to zooplankton, which have been observed to delay reproduction in times of scarcity.
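The resource-abundance feedback described above can be sketched as a toy discrete-time consumer-resource simulation. Every parameter value here, and the reproduction threshold itself, is an illustrative assumption, not taken from any of the models discussed.

```python
# Toy consumer-resource simulation: resources regrow logistically,
# the population reproduces only when resources exceed a threshold,
# and otherwise shrinks slowly (self-maintenance, no offspring).
# All parameter values are illustrative assumptions.
def simulate(steps=500, resources=100.0, population=10.0,
             regrowth=0.2, capacity=200.0, intake=0.05,
             birth=0.02, death=0.015, threshold=50.0):
    history = []
    for _ in range(steps):
        resources += regrowth * resources * (1 - resources / capacity)
        # Tiny floor keeps logistic regrowth from dying permanently.
        resources = max(resources - intake * population, 0.01)
        if resources > threshold:
            population *= 1 + birth    # abundance: reproduce
        else:
            population *= 1 - death    # scarcity: maintenance only
        history.append((resources, population))
    return history

history = simulate()
```

Even this crude version shows the qualitative behavior the models aim for: the population tracks resource availability, growing in times of plenty and contracting, rather than reproducing, when resources dip below the threshold.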
The question, then, is how to make this implicit relationship between resource availability and reproduction explicit in order to create a more nuanced and accurate model of extinction risk for a population. A new eco-evolutionary model published in Nature Communications on Tuesday purports to do just that, and also generated some unexpected results in the process, such as the ideal size for a land mammal.
Developed by three scientists at the Santa Fe Institute, a nonprofit research organization specializing in complex adaptive systems, the new nutritional state-structured model (NSM) describes the timescales involved when an organism switches from being a ‘full’ animal capable of reproduction, to a starving animal focused on self-maintenance as a function of resource availability.
According to Justin Yeakel, a UC Merced ecologist who formerly did postdoctoral work at the Santa Fe Institute, the NSM took insights from allometry—the science of how an animal’s body size relates to its anatomy and behavior—to derive realistic parameters for how fast a population of land mammals could accumulate or deplete its energy reserves based on what’s available in the environment.
Yeakel and his colleagues derived the amount of time it takes for an “average” land mammal to burn its excess energy, which is stored as body fat, by looking at allometric data from around 100 land mammals. The idea of an “average” land mammal sounds a bit strange, since everything from mice to humans to grizzly bears technically fall into this category, but the Santa Fe Institute researchers weren’t trying to model dynamics for a specific mammal. Instead, they were trying to create a general model that would apply broadly to all terrestrial mammals.
“When you boil life down to its essential ingredients, organisms have to reproduce to pass on our genes and we have to have enough energy to reproduce,” Yeakel told me on the phone. “Individual animals do really strange things that our model wouldn't capture. We were looking at average trends over this large class of organisms.”
What Yeakel and his colleagues found in their “simple model” was surprising. In the first place, it reproduced Cope’s Rule, a cornerstone of ecological theory, which states that animals of the same lineage tend to evolve toward larger body sizes. This trend continues until a certain inflection point, where evolutionary pressures begin to push in the opposite direction. Put another way, larger animals have an evolutionary advantage, but only up to a point.
The reason for this, Santa Fe Institute biologist Chris Kempes told me, is that larger organisms are able to drive resources to lower levels through consumption, while also being able to store more of this energy. Since the larger animals are able to store more consumed resources as energy (read: body fat), they can survive at lower levels of resource availability than animals with smaller body sizes.
Yet animals can’t keep growing forever. At a certain point, there won’t be enough resources available to a population of huge animals to allow them to create the energy stores necessary to survive resource shortages created by their own consumption of these very resources in the first place. In other words, animals of this size would basically be eating themselves into a state of starvation.
The point where these two macro-evolutionary trends meet, the pressure towards larger body sizes and the pressure towards smaller body sizes, is considered the ‘ideal’ body size in the sense that this body size is the most robust against resource-driven extinction. The NSM was able to predict the ideal body size for terrestrial mammals and found that it was about 2.5 times the size of an African elephant, the largest land mammal in the world.
When Yeakel and his colleagues checked the fossil record, it turned out that one of the largest land mammals in history—Deinotherium, which lived around 10 million years ago and was a relative of modern elephants—was almost exactly the size their model predicted.
“We didn't really have in mind that this model could predict the things that we ended up predicting,” Yeakel told me. “That made it all the more surprising when we started filling in the gaps and realizing that this meshes pretty well with what we see in nature.”
So why don’t we see land mammals around the ‘ideal’ size predicted by the NSM? According to Yeakel, this is because the model is only focused on starvation and recovery dynamics of mammalian populations, and doesn’t take into account all the other ecological forces acting on animal populations, such as predation or competition from other species.
“We would expect the optimal mass that we calculate to be an upper bound and one that is not often attained in nature,” Yeakel told me. “Organisms are not just constrained by starvation dynamics, so we observe many varieties of body sizes, which are optimal solutions (in a dynamic ever-changing way) to their particular environments and constraints.”
For now, the NSM is limited to terrestrial mammals, which Kempes said were selected because of the large amount of data available about their energetics. Nevertheless, both Kempes and Yeakel told me that the general model could be applied to other types of animals after appropriate changes were made in the parameters and that given the extinction risks faced by aquatic animals, this could be a fruitful research direction.
“There’s a lot of research about energetics and body composition for terrestrial mammals so we felt really confident in those scaling relationships,” Kempes said. “Of course, thinking about aquatic mammals in the same detailed way in the future would be very interesting.”
The US Air Force wants to spend around $8 billion in 2019 on harder-to-destroy satellites and other space equipment as it prepares for a possible orbital showdown with Russia and China.
But the military should also consider talking to its orbital rivals in order to head off conflict, one expert advised.
The 2019 budget for Air Force space equipment and R&D, including new satellite defenses, represents a 9-percent increase over 2018.
The Air Force, which controls most of the military's spacecraft and accounts for the majority of the Defense Department's space spending, warned of other countries' "ability to counter US space superiority."
In recent years both Russia and China have launched miniature satellites whose main job is to inspect damaged spacecraft, but which could also maneuver close to American spacecraft and disable them. Russia and China also possess powerful electronic jammers that can block the signals from GPS satellites, potentially disrupting US forces' navigation and precision bombing.
"We are in a more dangerous security environment than we have seen in a generation,” Maj. Gen. John Pletcher, the Air Force budget director, told reporters when the administration rolled out its spending blueprint on February 12.
Congress ultimately controls federal spending and could add to, or subtract from, the military's budget proposal.
With 2019 funding, the Air Force wants to spend nearly a billion dollars adding "resilience features" to communications and infrared-monitoring satellites currently under construction. Likewise, the flying branch wants to invest $1.5 billion in new GPS satellites with more powerful signals that could be harder for Russia and China to jam.
The resilience features on comms and infrared satellites might include better thrusters, allowing the spacecraft to maneuver more quickly in order to avoid attack. They may also include extra sensors on the spacecraft that act as a sort of orbital home-security system, monitoring the approach of potential assailants, according to James Oberg, an independent space expert and former astronaut.
"Now that autonomous minisatellites can approach other satellites, sometimes without detection from the ground, space-based detection must be installed on the potential targets," Oberg told me. The sensors could include cameras, radars, radio-signal detectors and "sniffers" that can track the energy from other satellites' thrusters, Oberg added.
"There is no down side to hardening space assets against hostile interference," Oberg said. But Laura Grego, a space expert at the Union of Concerned Scientists in Massachusetts, cautioned against relying entirely on technology to protect America's spacecraft.
"Resilience can take you some way to keeping space secure and reliable," Grego told me. "But resilience and planning must be coupled with robust limits on the most dangerous technologies and behaviors in space."
In other words, she pointed to international agreements and treaties such as the 1967 Outer Space Treaty, which bars weapons of mass destruction from Earth orbit. The Outer Space Treaty has many loopholes. For one, a satellite that has peaceful uses but could also, with the flip of a switch, attack other spacecraft is generally considered legal under the treaty.
Grego advised combining defensive space technology with stronger legal limits on orbital weaponry. "Otherwise you may just be facing other types of threats that aren’t so easily dealt with and will be doing nothing to help keep at bay the risk of space activities sparking crises."
Diplomacy "is necessary for a long-term future in space and I think we shouldn’t waste any time," Grego added. "We should be engaging other countries now rather than later."
Considering all the work that went into creating the Star Wars universe, comparatively little attention was paid to physical realism in the franchise. Arguably the most egregious violation of physical laws in Star Wars is the iconic lightsaber wielded by the Jedi. These weapons should be impossible because light particles—called photons—don’t interact with one another the way normal matter does. This is why you and your friends can’t re-enact epic ’saber battles with a couple of flashlights. I mean, you could, but you’ll just look like a bunch of dinguses.
Research published today in Science gives ‘a new hope’ (I’m so sorry) for those holding out for lightsabers. A team of physicists has created a new form of light that permits up to three photons to bind together. The technology isn’t quite ready to defeat the Dark Side, but it could be a major boon to photon-based quantum computers.
The two lead researchers on the project, MIT physicist Vladan Vuletic and Harvard physicist Mikhail Lukin, head up the joint MIT-Harvard Center for Ultracold Atoms and have spent the last few years trying to make photons interact with each other. Their first major success was in 2013, when the researchers managed to get two photons to bind together to create a new form of light—but they wanted to know if this was the limit to photon interactions.
“You can combine oxygen molecules to form O2 and O3, but not O4, and for some molecules you can’t form even a three-particle molecule,” Vuletic said in a statement. “So it was an open question: Can you add more photons to a molecule to make bigger and bigger things?”
Most particles acquire mass by interacting with the Higgs field, which is a ubiquitous field of energy. Photons, on the other hand, don’t interact with the Higgs field and have no mass, which is why two photons are able to pass through one another like ghosts if you were to, say, shine two flashlight beams at one another.
In order to get these massless particles of light to bind together like normal matter, Vuletic and Lukin created an experimental set-up that involves shining a laser through some very cold atoms. In particular, they were using a cloud of rubidium atoms chilled to just a millionth of a degree above absolute zero. This makes it so the rubidium atoms in the cloud are hardly moving. Then they shine a weak laser beam through the supercooled atomic cloud, so that only a few photons pass through to be measured on the other side of the apparatus.
What they found was that the photons emerging on the other side were strongly bound together in groups of three, and had actually acquired a very small amount of mass (equal to just a fraction of the mass of an electron). As a result, these photon triplets moved 100,000 times slower than a normal photon, which travels at 300,000 kilometers per second.
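As a quick back-of-the-envelope check on those numbers (this arithmetic is mine, not from the paper):

```python
light_speed_km_s = 300_000   # photon speed in vacuum, km/s
slowdown_factor = 100_000    # reported slowdown of the bound triplets

# The triplets crawl along at roughly 3 km/s, compared with
# 300,000 km/s for an ordinary photon.
triplet_speed_km_s = light_speed_km_s / slowdown_factor
```

Three kilometers per second is still fast by everyday standards, but for light it's a crawl.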
But wait—there’s more.
Vuletic, Lukin and their colleagues developed a theory for what caused these photons to bind together like this in the first place. In this model, the photons basically skip from one rubidium atom to the next. While a photon is ‘on’ a rubidium atom, it can create a hybrid atom-photon called a polariton. If multiple polaritons are formed in the cloud, they can interact with one another by way of the rubidium half of the hybrid as the polaritons continue to move through the rubidium cloud. When the polaritons reach the ‘edge’ of the cloud, the rubidium atoms remain in the cloud while the still-bound-together photons exit. According to the researchers, this entire process occurs within a millionth of a second.
The important thing here is that this process allows photons to interact with one another when they otherwise wouldn’t. They are essentially entangled, a property that is integral to manipulating qubits in quantum devices. The photon triplets created by Vuletic and Lukin are an improvement over other photonic qubits, however, because they are much more strongly bound together and are, as a result, better carriers of information.
Given the highly experimental nature of Vuletic and Lukin’s research, it will likely be a while before it is put to any practical use. In fact, the researchers said they themselves often don’t know what to expect from their experiments. Going forward, they said they intend to figure out ways to cause other interactions among photons, such as making them repel one another.
“It’s completely novel in the sense that we don’t even know sometimes qualitatively what to expect,” Vuletic said in a statement. “With repulsion of photons, can they be such that they form a regular pattern, like a crystal of light, or will something else happen? It’s very uncharted territory.”
Earlier this week, the robotics company Boston Dynamics released a clip of their BigDog model—a four-legged, gas-powered, mobile machine—dexterously opening a door for its buddy. In light of the recent Black Mirror episode “Metalhead,” which features a similar dogbot equipped with a deadly arsenal, it’s understandably chilling to behold Boston Dynamics’ creations and their impressive abilities.
But the collective freakout over the canine door openers overshadowed another robot video, dropped by Purdue University’s School of Mechanical Engineering on Tuesday, which might actually outcreep the Boston Dynamics clip.
In contrast to the BigDogs, Purdue’s “microscale magnetic tumbling robots,” or microTUMs, are extremely small, measuring about 400 by 800 microns—about the size of a grain of sand. Shaped like dumbbells, the tiny machines are outfitted with magnetic end caps, enabling them to “tumble” continuously over a variety of terrains, powered by a shifting magnetic field.
So in case you were wondering what’s more frightening than large robots that open doors, the answer is microbots that can tumble under them in swarms. Wherever you are, these little bots will be able to squeeze past most any barrier and traverse most any landscape to reach you (and yes, there is a Black Mirror episode for that variety of nightmare too).
Your hackles might be raised even further once you learn that this Purdue study, led by graduate student Chenghao Bi and published in the journal Micromachines, proposes injecting these microTUMs into the human body.
Rest assured, though, that the team is not trying to pull some kind of Magic School Bus stunt. Instead, Purdue engineers hope to use the bots as a targeted drug delivery vector, with biomedical payloads packed onto the central module and directed to a specific location within the body, allowing for more precise medical treatments.
“We envision these robots being injected into the patient inside the body, in complex sticky terrains like tissue,” said study co-author David Cappelleri, director of Purdue’s Multi-Scale Robotics and Automation Lab, in the video. “You could imagine these robots maybe being in the stomach somewhere [tumbling] to their goal location, where they need to be.”
Now to answer the obvious question: Would you rather fight one BigDog-sized microTUM, or 100 microTUM-sized BigDogs?