Stanford

Stanford’s Doggo is a petite robotic quadruped you can (maybe) build yourself

Got a few thousand bucks and a good deal of engineering expertise? You’re in luck: Stanford students have created a quadrupedal robot platform called Doggo that you can build with off-the-shelf parts and a considerable amount of elbow grease. That’s better than the alternatives, which generally require a hundred grand and a government-sponsored lab.

Due to be presented (paper on arXiv here) at the IEEE International Conference on Robotics and Automation, Doggo is the result of research by the Stanford Robotics Club, specifically its Extreme Mobility team. The idea was to make a modern quadrupedal platform that others could build and test on, while keeping costs and custom parts to a minimum.

The result is a cute little bot with rigid-looking but surprisingly compliant polygonal legs that has a jaunty, bouncy little walk and can leap more than three feet in the air. There are no physical springs or shocks involved; instead, by sampling the forces on the legs 8,000 times per second and responding just as quickly, the motors can act like virtual springs.
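For those curious how springless compliance can be done in software, here is a minimal sketch of a virtual-spring control loop, assuming a hypothetical leg-state sensor and motor command interface; the stiffness and damping values are illustrative, not Doggo's actual firmware parameters.

```python
# Minimal sketch of a "virtual spring" leg: the motor command is computed from how
# far the leg has deflected, with no physical spring in the mechanism.
# The 8 kHz rate mirrors the article; the stiffness, damping and I/O callables
# are illustrative assumptions, not Doggo's actual firmware.

import time

STIFFNESS_N_PER_M = 400.0   # virtual spring constant (assumed value)
DAMPING_NS_PER_M = 5.0      # virtual damping (assumed value)
LOOP_HZ = 8000              # force-sampling / response rate cited in the article


def virtual_spring_command(deflection_m, velocity_m_s):
    """Spring-like restoring command that opposes deflection and motion."""
    return -(STIFFNESS_N_PER_M * deflection_m + DAMPING_NS_PER_M * velocity_m_s)


def run_leg_controller(read_leg_state, send_motor_command):
    """read_leg_state() -> (deflection_m, velocity_m_s); send_motor_command(force)."""
    period_s = 1.0 / LOOP_HZ
    while True:
        deflection, velocity = read_leg_state()
        send_motor_command(virtual_spring_command(deflection, velocity))
        time.sleep(period_s)  # real firmware would use a hard real-time scheduler
```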

It’s limited in its autonomy, but that’s because it’s built to move, not to see and understand the world around it. That is, however, something you, dear reader, could work on. Because it’s relatively cheap and doesn’t involve some exotic motor or proprietary parts, it could be a good basis for research at other robotics departments. You can see the designs and parts necessary to build your own Doggo right here.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Doggo lead Nathan Kau in a Stanford news post. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

In the meantime, the Extreme Mobility team will be improving Doggo’s capabilities in collaboration with the university’s Robotic Exploration Lab, and also working on a similar robot at twice the size, called Woofer.

Snap CEO’s sister Caroline Spiegel starts a no-visuals porn site

If you took the photos and videos out of pornography, could it appeal to a new audience? Caroline Spiegel’s first startup Quinn aims to bring some imagination to adult entertainment. Her older brother, Snapchat CEO Evan Spiegel, spent years trying to convince people his app wasn’t just for sexy texting. Now Caroline is building a website dedicated to sexy text and audio. The 22-year-old college senior tells TechCrunch that on April 13th she’ll launch Quinn, which she describes as “a much less gross, more fun Pornhub for women.”

TechCrunch checked out Quinn’s private beta site, which is pretty bare bones right now. Caroline tells us she’s already raised less than a million dollars for the project. But given her brother’s success spotting the next generation’s behavior patterns and turning them into beloved products, Caroline might find investors are eager to throw cash at Quinn. That’s especially true given she’s taking a contrarian approach. There will be no imagery on Quinn.

Caroline explains that “There’s no visual content on the site — just audio and written stories. And the whole thing is open source, so people can submit content and fantasies, etc. Everything is vetted by us before it goes on the site.” The computer science major is building Quinn with a three-woman team of her best friends she met while at Stanford, including Greta Meyer, though they plan to relocate to LA after graduation.

“His dream girl was named ‘Quinn’ “

The idea for Quinn sprung from a deeply personal need. “I came up with it because I had to leave Stanford my junior year because I was struggling with anorexia and sexual dysfunction that came along with that,” Caroline tells me. “I started to do a lot of research into sexual dysfunction cures. There are about 30 FDA-approved drugs for sexual dysfunction for men but zero for women, and that’s a big bummer.”

She believes there’s still a stigma around women pleasuring themselves, leading to a lack of products offering assistance. Sure, there are plenty of porn sites, but few are explicitly designed for women, and fewer stray outside of visual content. Caroline says photos and videos can create body image pressure, but with text and audio, anyone can imagine themselves in a scene. “Most visual media perpetuates the male gaze … all mainstream porn tells one story … You don’t have to fit one idea of what a woman should look like.”

That concept fits with the startup’s name “Quinn,” which Caroline says one of her best guy friends thought up. “He said this girl he met — his dream girl — was named ‘Quinn.’ ”

Caroline took to Reddit and Tumblr to find Quinn’s first creators. Reddit stuck to text and links for much of its history, fostering the kinky literature and audio communities. And when Tumblr banned porn in December, it left a legion of adult content makers looking for a new home. “Our audio ranges from guided masturbation to overheard sex, and there’s also narrated stories. It’s literally everything. Different strokes for different folks, know what I mean?” Caroline says with a cheeky laugh.

To establish its brand, Quinn is running social media influencer campaigns where “The basic idea is to make people feel like it’s okay to experience pleasure. It’s hard to make something like masturbation cool, so that’s a little bit of a lofty goal. We’re just trying to make it feel okay, and even more okay than it is for men.”

As for the business model, Caroline’s research found younger women were embarrassed to pay for porn. Instead, Quinn plans to run ads, though there could be commerce opportunities too. And because the site doesn’t bombard users with nude photos or hardcore videos, it might be able to attract sponsors that most porn sites can’t.

Evan is “very supportive”

Until monetization spins up, Quinn has the sub-$1 million in funding that Caroline won’t reveal the source of, though she confirms it’s not from her brother. “I wouldn’t say that he’s particularly involved other than he’s one of the most important people in my life and I talk to him all the time. He gives me the best advice I can imagine,” the younger sibling says. “He doesn’t have any qualms, he’s very supportive.”

Quinn will need all the morale it can get, as Caroline bluntly admits, “We have a lot of competitors.” There’s the traditional stuff like Pornhub, user-generated content sites like Make Love Not Porn and spontaneous communities like those on Reddit. She calls $5 million-funded audio porn startup Dipsea “an exciting competitor,” though she notes that “we sway a little more erotic than they do, but we’re so supportive of their mission.” How friendly.

Quinn’s biggest rival will likely be the outdated but institutionalized site Literotica, which SimilarWeb ranks as the 60th most popular adult website and the 631st most visited site overall, with some 53 million hits per month. But the fact that Literotica looks like a web 1.0 forum yet pulls that much traffic signals a massive opportunity for Quinn. With app store rules against porn keeping Quinn from launching native mobile apps, it will have to put all its effort into making its website stand out if it’s going to survive.

But more than competition, Caroline fears that Quinn will have to convince women to give its style of porn a try. “Basically, there’s this idea that for men, masturbation is an innate drive and for women it’s a ‘could do without it, could do with it.’ Quinn is going to have to make a market alongside a product and that terrifies me,” Caroline says, her voice building with enthusiasm. “But that’s what excites me the most about it, because what I’m banking on is if you’ve never had chocolate before, you don’t know. But once you have it, you start craving it. A lot of women haven’t experienced raw, visceral pleasure before, [but once we help them find it] we’ll have momentum.”

Most importantly, Quinn wants all women to feel they have rightful access to whatever they fancy. “It’s not about deserving to feel great. You don’t have to do Pilates to use this. You don’t have to always eat right. There’s no deserving with our product. Our mission is for women to be more in touch with themselves and feel fucking great. It’s all about pleasure and good vibes.”

This self-driving AI faced off against a champion racer (kind of)

Developments in the self-driving car world can sometimes be a bit dry: a million miles without an accident, a 10 percent increase in pedestrian detection range, and so on. But this research has both an interesting idea behind it and a surprisingly hands-on method of testing: pitting the vehicle against a real racing driver on a course.

To set expectations here, this isn’t some stunt, it’s actually warranted given the nature of the research, and it’s not like they were trading positions, jockeying for entry lines, and generally rubbing bumpers. They went separately, and the researcher, whom I contacted, politely declined to provide the actual lap times. This is science, people. Please!

The question that Nathan Spielberg and his colleagues at Stanford were interested in answering has to do with an autonomous vehicle operating under extreme conditions. The simple fact is that a huge proportion of the miles driven by these systems are at normal speeds and in good conditions. And most obstacle encounters are similarly ordinary.

If the worst should happen and a car needs to exceed these ordinary bounds of handling — specifically friction limits — can it be trusted to do so? And how would you build an AI agent that can do so?

The researchers’ paper, published today in the journal Science Robotics, begins with the assumption that a physics-based model just isn’t adequate for the job. These are computer models that simulate the car’s motion in terms of weight, speed, road surface and other conditions. But they are necessarily simplified, and their simplifying assumptions produce increasingly inaccurate results as conditions exceed ordinary limits.

Imagine a simulator that reduces each wheel to a point or a line, when during a slide it is highly important which side of the tire is experiencing the most friction. Simulations detailed enough to capture that are beyond the ability of current hardware to run quickly or accurately enough. But the results of such simulations can be summarized into inputs and outputs, and that data can be fed into a neural network — one that turns out to be remarkably good at taking turns.
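As a rough illustration of that train-on-simulation idea, here is a minimal sketch in which a toy physics function stands in for the simulator and a generic regressor learns its input-to-output mapping; the features, the stand-in physics and the network size are all illustrative assumptions, not the model described in the paper.

```python
# Toy illustration of "summarize a simulator into data, then train a network":
# a stand-in physics function maps (speed, curvature) to a steering angle, and a
# generic regressor learns that mapping. The features, stand-in physics and
# network size are assumptions, not the model in the Science Robotics paper.

import numpy as np
from sklearn.neural_network import MLPRegressor

WHEELBASE_M = 2.9  # roughly coupe-sized, for the toy model only


def toy_simulator(speed_m_s, curvature_1_per_m):
    """Crude bicycle-model-like steering angle for a desired path curvature."""
    return np.arctan(WHEELBASE_M * curvature_1_per_m) * (1.0 + 0.02 * speed_m_s)


rng = np.random.default_rng(0)
speeds = rng.uniform(5.0, 40.0, 5000)         # m/s
curvatures = rng.uniform(-0.05, 0.05, 5000)   # 1/m
X = np.column_stack([speeds, curvatures])
y = toy_simulator(speeds, curvatures)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)

# Query the learned model for a steering angle at 30 m/s on a gentle curve.
print(model.predict([[30.0, 0.02]]))
```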

The simulation provides the basics of how a car of this make and weight should move when it is going at speed X and needs to turn at angle Y — obviously it’s more complicated than that, but you get the idea. The model then consults its training, but is also informed by the real-world results, which may differ from theory.

So the car goes into a turn knowing that, theoretically, it should have to move the wheel this much to the left, then this much more at this point, and so on. But the sensors in the car report that despite this, the car is drifting a bit off the intended line — and this input is taken into account, causing the agent to turn the wheel a bit more, or less, or whatever the case may be.
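In control terms that is a feedforward command corrected by feedback, and a toy version might look like the sketch below; the gains and sign conventions are invented for illustration and are not the controller from the paper.

```python
# Sketch of feedforward-plus-feedback steering: the learned model supplies a
# nominal command, and measured deviation from the intended line nudges it.
# The gains and sign conventions are illustrative assumptions.

K_LATERAL = 0.5   # steering radians per meter of lateral tracking error (assumed)
K_HEADING = 0.8   # steering radians per radian of heading error (assumed)


def steering_command(feedforward_rad, lateral_error_m, heading_error_rad):
    """Combine the model's feedforward steering with feedback on tracking error."""
    feedback = K_LATERAL * lateral_error_m + K_HEADING * heading_error_rad
    return feedforward_rad + feedback


# The model suggests 0.10 rad, but the car is drifting 0.3 m off the line:
print(steering_command(0.10, lateral_error_m=0.3, heading_error_rad=0.02))
```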

And where does the racing driver come into it, you ask? Well, the researchers needed to compare the car’s performance with a human driver who knows from experience how to control a car at its friction limits, and that’s pretty much the definition of a racer. If your tires aren’t hot, you’re probably going too slow.

The team had the racer (a “champion amateur race car driver,” as they put it) drive around the Thunderhill Raceway Park in California, then sent Shelley — their modified, self-driving 2009 Audi TTS — around as well, ten times each. And it wasn’t a relaxing Sunday ramble. As the paper reads:

Both the automated vehicle and human participant attempted to complete the course in the minimum amount of time. This consisted of driving at accelerations nearing 0.95g while tracking a minimum time racing trajectory at the physical limits of tire adhesion. At this combined level of longitudinal and lateral acceleration, the vehicle was able to approach speeds of 95 miles per hour (mph) on portions of the track.

Even under these extreme driving conditions, the controller was able to consistently track the racing line with the mean path tracking error below 40 cm everywhere on the track.

In other words, while pulling a G and hitting 95, the self-driving Audi was never more than a foot and a half off its ideal racing line. The human driver had much wider variation, but this is by no means considered an error — they were changing the line for their own reasons.

“We focused on a segment of the track with a variety of turns that provided the comparison we needed and allowed us to gather more data sets,” wrote Spielberg in an email to TechCrunch. “We have done full lap comparisons and the same trends hold. Shelley has an advantage of consistency while the human drivers have the advantage of changing their line as the car changes, something we are currently implementing.”

Shelley showed far lower variation in its times than the racer, but the racer also posted considerably lower times on several laps. The averages for the segments evaluated were about comparable, with a slight edge going to the human.

This is pretty impressive considering the simplicity of the self-driving model. It had very little real-world knowledge going into its systems, mostly the results of a simulation giving it an approximate idea of how it ought to be handling moment by moment. And its feedback was very limited — it didn’t have access to all the advanced telemetry that self-driving systems often use to flesh out the scene.

The conclusion is that this type of approach, with a relatively simple model controlling the car beyond ordinary handling conditions, is promising. It would need to be tweaked for each surface and setup — obviously a rear-wheel-drive car on a dirt road would behave differently from a front-wheel-drive car on tarmac. How best to create and test such models is a matter for future investigation, though the team seemed confident it was a mere engineering challenge.

The experiment was undertaken in order to pursue the still-distant goal of self-driving cars being superior to humans on all driving tasks. The results from these early tests are promising, but there’s still a long way to go before an AV can take on a pro head-to-head. But I look forward to the occasion.

Inspired by spiders and wasps, these tiny drones pull 40x their own weight

If we want drones to do our dirty work for us, they’re going to need to get pretty good at hauling stuff around. But due to the pesky yet unavoidable restraints of physics, it’s hard for them to muster the forces necessary to do so while airborne — so these drones brace themselves against the ground to get the requisite torque.

The drones, created by engineers at Stanford and Switzerland’s EPFL, were inspired by wasps and spiders that need to drag prey from place to place but can’t actually lift it, so they drag it instead. Grippy feet and strong threads or jaws let them pull objects many times their weight along the ground, just as you might slide a dresser along rather than pick it up and put it down again. So I guess it could have also just been inspired by that.

Whatever the inspiration, these “FlyCroTugs” (a combination of flying, micro and tug presumably) act like ordinary tiny drones while in the air, able to move freely about and land wherever they need to. But they’re equipped with three critical components: an anchor to attach to objects, a winch to pull on that anchor and sticky feet to provide sure grip while doing so.

“By combining the aerodynamic forces of our vehicle and the interactive forces generated by the attachment mechanisms, we were able to come up with something that is very mobile, very strong and very small,” said Stanford grad student Matthew Estrada, lead author of the paper published in Science Robotics.

The idea is that one or several of these ~100-gram drones could attach their anchors to something they need to move, be it a lever or a piece of trash. Then they take off and land nearby, spooling out thread as they do so. Once they’re back on terra firma they activate their winches, pulling the object along the ground — or up over obstacles that would have been impossible to navigate with tiny wheels or feet.

Using this technique — assuming they can get a solid grip on whatever surface they land on — the drones are capable of moving objects 40 times their weight — for a 100-gram drone like that shown, that would be about 4 kilograms, or nearly 9 pounds. Not quickly, but that may not always be a necessity. What if a handful of these things flew around the house when you were gone, picking up bits of trash or moving mail into piles? They would have hours to do it.
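To put the 40x figure in perspective against what a drone can manage purely in the air, here is a quick back-of-the-envelope calculation; the free-flight thrust-to-weight ratio is an assumed typical value, not a number from the paper.

```python
# Back-of-the-envelope for the 40x figure: what a ~100 g FlyCroTug can drag while
# anchored and winching versus what it could plausibly lift in free flight.
# The free-flight thrust-to-weight ratio is an assumed typical value, not a
# number from the paper.

G = 9.81                         # m/s^2
drone_mass_kg = 0.100            # ~100 g vehicle, as in the article
pull_ratio = 40                  # payload-to-weight ratio when anchored (article)
assumed_thrust_to_weight = 2.0   # assumption for a small hovering multirotor

anchored_pull_kg = pull_ratio * drone_mass_kg                              # ~4 kg
free_flight_payload_kg = (assumed_thrust_to_weight - 1.0) * drone_mass_kg  # ~0.1 kg

print(f"anchored pull: ~{anchored_pull_kg:.1f} kg "
      f"({anchored_pull_kg * 2.205:.1f} lb, {anchored_pull_kg * G:.0f} N)")
print(f"free-flight payload at assumed T/W {assumed_thrust_to_weight}: "
      f"~{free_flight_payload_kg * 1000:.0f} g")
```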

As you can see in the video below, they can even team up to do things like open doors.

“People tend to think of drones as machines that fly and observe the world,” said co-author of the paper, EPFL’s Dario Floreano, in a news release. “But flying insects do many other things, such as walking, climbing, grasping and building. Social insects can even work together and combine their strength. Through our research, we show that small drones are capable of anchoring themselves to surfaces around them and cooperating with fellow drones. This enables them to perform tasks typically assigned to humanoid robots or much larger machines.”

Unless you’re prepared to wait for humanoid robots to take on tasks like this (and it may be a decade or two), you may have to settle for drone swarms in the meantime.

VR optics could help old folks keep the world in focus

The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smartglasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.

I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others on his team were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.

There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, and they only provide a small “viewport” through which to view the world. There are also adjustable-lens glasses, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?

That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.

Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way that we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only 10 feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.

This is an old prototype, but you get the idea.

It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table three feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.

Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.

In the case above, if the user is looking at the table or the rest of the room, the glasses will assume whatever normal correction the person requires to see — perhaps none. But if they shift their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.

The whole process of checking the gaze, finding the depth of the selected object and adjusting the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it, but the whole process of redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the adjustment in the device will be complete by the time the user’s eyes would normally be at rest again.
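A minimal sketch of that decision loop might look like the following, assuming hypothetical depth-map, gaze-tracker and tunable-lens interfaces; the thin-lens arithmetic is a simplification, and only the roughly 20-inch near limit comes from the article's example.

```python
# Sketch of the autofocal decision loop: find the depth at the gaze point,
# compare it to the wearer's near limit, and add lens power as needed.
# The depth-map, gaze-tracker and lens objects are hypothetical placeholders,
# and the thin-lens arithmetic is a simplification; the ~20 inch (0.5 m) near
# limit is the example given in the article.

NEAR_LIMIT_M = 0.5  # wearer cannot focus closer than ~20 inches on their own


def required_added_power(distance_m, near_limit_m=NEAR_LIMIT_M):
    """Extra optical power (diopters) to focus at distance_m.

    If the target is beyond the wearer's near limit, no extra power is needed;
    otherwise add the difference in vergence between the target and the limit.
    """
    if distance_m >= near_limit_m:
        return 0.0
    return (1.0 / distance_m) - (1.0 / near_limit_m)


def update_lenses(depth_map, gaze_tracker, lenses):
    """One iteration of the loop; the article cites roughly 150 ms end to end."""
    gaze_x, gaze_y = gaze_tracker.current_gaze()              # hypothetical API
    distance_m = depth_map.depth_at(gaze_x, gaze_y)           # hypothetical API
    lenses.set_added_power(required_added_power(distance_m))  # hypothetical API
```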

“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”

The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and test for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method and despite its early stage, it’s highly promising. We can expect to hear more from them when the full paper is published.

Autonomous cars could peep around corners via bouncing laser

Autonomous cars gather up tons of data about the world around them, but even the best computer vision systems can’t see through brick and mortar. But by carefully monitoring the reflected light of a laser bouncing off a nearby surface, they might be able to see around corners — that’s the idea behind recently published research from Stanford engineers.

Research heralds better and bidirectional brain-computer interfaces

A pair of studies, one from Stanford and another from the University of Geneva, exemplify the speed with which brain-computer interfaces are advancing; and while you won’t be using one instead of a mouse and keyboard any time soon, even in its nascent form the tech may prove transformative for the disabled.

Stanford Uses Virtual Reality To Make Its Heisman Pitch For Christian McCaffrey

Whether you’re a sports fan or not, it’s hard not to get excited around bowl season in college football. More exciting than that is watching all of the teams push their own candidates for the Heisman Trophy, given out to this year’s best NCAA football player.
