Robotics

Mars Rover Curiosity is switching brains so it can fix itself


When you send something to space, it’s good to have redundancy. Sometimes you want to send two whole duplicate spacecraft just in case — as was the case with Voyager — but sometimes it’s enough to carry duplicates of the critical components. Mars Rover Curiosity is no exception, and it is now in the process of switching from one main “brain” to the other so it can perform digital surgery on the first.

Curiosity landed on Mars with two central computing systems, Side-A and Side-B (not left brain and right brain — that would invite too much silliness). They’re perfect duplicates of each other, or were — it was something of a bumpy ride, after all, and cosmic radiation may flip a bit here and there.

The team was thankful to have made these preparations when, on sol 200 in February of 2013 (we’re almost to sol 2,200 now), the Side-A computer experienced a glitch that ended up taking the whole rover offline. The solution was to swap over to Side-B, which was up and running shortly afterwards, sending back diagnostic data about its twin.

Having run for several years with no issues, Side-B is now, however, having its own problems. Since September 15 it has been unable to record mission data, and it doesn’t appear to be a problem that the computer can solve itself. Fortunately, in the intervening period, Side-A has been fixed up to working condition — though it has a bit less memory than it used to, since some corrupted sectors had to be quarantined.

“We spent the last week checking out Side A and preparing it for the swap,” said Steven Lee, deputy project manager of the Curiosity program at JPL, in a mission status report. “We are operating on Side A starting today, but it could take us time to fully understand the root cause of the issue and devise workarounds for the memory on Side B. It’s certainly possible to run the mission on the Side-A computer if we really need to. But our plan is to switch back to Side B as soon as we can fix the problem to utilize its larger memory size.”

No timeline just yet for how that will happen, but the team is confident that they’ll have things back on track soon. The mission isn’t in jeopardy — but this is a good example of how a robust system of redundancies can add years to the life of space hardware.


This autonomous spray-painting drone is a 21st-century tagger’s dream


Whenever I see an overpass or billboard that’s been tagged, I worry about the tagger and the danger they exposed themselves to in order to get that cherry spot. Perhaps this spray paint-toting drone developed by ETH Zurich and Disney Research will take some of the danger out of the hobby. It also could be used for murals and stuff, I guess.

Although it seems an obvious application in retrospect, there just isn’t a lot of drone-based painting being done out there. Consider: A company could shorten or skip the whole scaffolding phase of painting a building or advertisement, leaving the bulk of painting to a drone. Why not?

There simply isn’t much research into it yet, and like so many domain-specific applications, the problem is deceptively complex. This paper only establishes the rudiments of a system, but the potential is clearly there.

The drone used by the researchers is a DJI Matrice 100, customized with a sensing rig mounted on one side and a spraying assembly on the other, the two counterbalancing each other. The sprayer, notably, is not just a nozzle but a pan-and-tilt mechanism that can paint details the drone itself can’t be relied on to trace. To be clear, we’re still talking broad strokes here, but accurate to an inch rather than three or four.

It’s also been modified to use wired power and a constant supply of paint, which simplifies the physics and also reduces limits on the size of the surface to be painted. A drone lugging its own paint can wouldn’t be able to fly far, and its thrust would have to be constantly adjusted to account for the lost weight of sprayed paint. See? Complex.

The first step is to 3D scan the surface to be painted; this can be done manually or via drone. The mesh is then compared to the design to be painted and a system creates a proposed path for the drone.

Lastly the drone is set free to do its thing. It doesn’t go super fast in this prototype form, nor should it, since even the best drones can’t stop on a dime, and tend to swing about when they reduce speed or change direction. Slow and steady is the word, following a general path to put the nozzle in range of where it needs to shoot. All the while it is checking its location against the known 3D map of the surface so it doesn’t get off track.
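
To make that pipeline concrete, here is a minimal sketch of the plan-then-correct loop in Python. It is illustrative only, not the researchers’ code: the mesh lookup (surface_mesh.project), the drone control calls, and the standoff, speed and tolerance values are all assumptions.

```python
import numpy as np

def plan_spray_path(design_strokes, surface_mesh, standoff=0.3):
    """Project each stroke of the design onto the scanned surface mesh and
    offset it by the spray standoff distance to get drone waypoints."""
    waypoints = []
    for stroke in design_strokes:                       # stroke: list of (u, v) points
        for u, v in stroke:
            point, normal = surface_mesh.project(u, v)  # hypothetical mesh lookup
            waypoints.append(point + standoff * normal) # hover a fixed distance off the wall
    return np.array(waypoints)

def follow_path(drone, waypoints, tolerance=0.025):
    """Fly slowly from waypoint to waypoint, checking the estimated pose against
    the 3D map and letting the pan-tilt nozzle absorb small position errors."""
    for target in waypoints:
        while np.linalg.norm(drone.position() - target) > tolerance:
            drone.move_toward(target, speed=0.2)        # slow and steady
            drone.aim_nozzle_at(target)                 # pan-tilt correction for fine detail
        drone.spray()
```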

In case you’re struggling to see the “bear,” it’s standing up with its paws on a tree. That took me a long time to see, so I thought I’d spare you the trouble.

Let’s be honest: This thing isn’t going to do anything much more complicated than some line work or a fill. But for a lot of jobs that’s exactly what’s needed — and it’s often the type of work that’s the least suited to skilled humans, who would rather be doing stuff only they can do. A drone could fill in all the easy parts on a building and then the workers can do the painstaking work around the windows or add embellishments and details.

For now this is strictly foundational work — no one is going to hire this drone to draw a Matterhorn on their house — but there’s a lot of potential here if the engineering and control methods can be set down with confidence.


‘Jackrabbot 2’ takes to the sidewalks to learn how humans navigate politely


Autonomous vehicles and robots have to know how to get from A to B without hitting obstacles or pedestrians — but how can they do so politely and without disturbing nearby humans? That’s what Stanford’s Jackrabbot project aims to learn, and now a redesigned robot will be cruising campus learning the subtleties of humans negotiating one another’s personal space.

“There are many behaviors that we humans subconsciously follow – when I’m walking through crowds, I maintain personal distance or, if I’m talking with you, someone wouldn’t go between us and interrupt,” said grad student Ashwini Pokle in a Stanford News release. “We’re working on these deep learning algorithms so that the robot can adapt these behaviors and be more polite to people.”

Of course there are practical applications as well, pertaining to last-mile problems and robotic delivery. What do you do if someone stops in front of you? What if there’s a group running up behind? Experience is the best teacher, as usual.

The first robot was deployed in 2016 and has been hard at work building a model of how humans (well, mostly undergrads) walk around safely, avoiding one another while taking efficient paths and signaling what they’re doing the whole time. But technology has advanced so quickly that a new iteration was called for.

The JackRabbot project team with JackRabbot 2 (from left to right): Patrick Goebel, Noriaki Hirose, Tin Tin Wisniewski, Amir Sadeghian, Alan Federman, Silvio Savarese, Roberto Martín-Martín, Pin Pin Tea-mangkornpan and Ashwini Pokle

The new robot has a vastly improved sensor suite compared to its predecessor: two Velodyne lidar units giving 360-degree coverage, plus a set of stereo cameras making up its neck that give it another depth-sensing 360-degree view. The cameras and sensors on its head can also be pointed wherever needed, of course, just like ours. All this imagery is collated by a pair of new GPUs in its base/body.

Amir Sadeghian, one of the researchers, said this makes Jackrabbot 2 “one of the most powerful robots of its size that has ever been built.”

This will allow the robot to sense human motion with a much greater degree of precision than before, and also operate more safely. It will also give the researchers a chance to see how the movement models created by the previous robot integrate with this new imagery.

The other major addition is a totally normal-looking arm that Jackrabbot 2 can use to gesture to others. After all, we do it, right? When it’s unclear who should enter a door first or what side of a path they should take, a wave of the hand is all it takes to clear things up. Usually. Hopefully this kinked little gripper accomplishes the same thing.

Jackrabbot 2 can zoom around for several hours at a time, Sadeghian said. “At this stage of the project for safety we have a human with a safety switch accompanying the robot, but the robot is able to navigate in a fully autonomous way.”

Having working knowledge of how people use the space around them and how to predict their movements will be useful to startups like Kiwi, Starship, and Marble. The first time a delivery robot smacks into someone’s legs is the last time they consider ordering something via one.


Vtrus launches drones to inspect and protect your warehouses and factories


Knowing what’s going on in your warehouses and facilities is of course critical to many industries, but regular inspections take time, money, and personnel. Why not use drones? Vtrus uses computer vision to let a compact drone not just safely navigate indoor environments but create detailed 3D maps of them for inspectors and workers to consult, autonomously and in real time.

Vtrus showed off its hardware platform — currently a prototype — and its proprietary SLAM (simultaneous localization and mapping) software at TechCrunch Disrupt SF as a Startup Battlefield Wildcard company.

There are already some drone-based services for the likes of security and exterior imaging, but Vtrus CTO Jonathan Lenoff told me that those are only practical because they operate with a large margin for error. If you’re searching for open doors or intruders beyond the fence, it doesn’t matter if you’re at 25 feet up or 26. But inside a warehouse or production line every inch counts and imaging has to be carried out at a much finer scale.

As a result, dangerous and tedious inspections, such as checking the wiring on lighting or looking for rust under an elevated walkway, have to be done by people. Vtrus wouldn’t put those people out of work, but it might take them out of danger.

The drone, called the ABI Zero for now, is equipped with a suite of sensors, from ordinary RGB cameras to 360-degree ones and a structured-light depth sensor. As soon as it takes off, it begins mapping its environment in great detail: it takes in 300,000 depth points 30 times per second, combining that with its other cameras to produce a detailed map of its surroundings.

It uses this information to get around, of course, but the data is also streamed over Wi-Fi in real time to the base station and Vtrus’s own cloud service, through which operators and inspectors can access it.
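
For a rough sense of what that pipeline implies, here is a hedged sketch of fusing each frame of roughly 300,000 depth points into a shared voxel map and streaming the update. It is not Vtrus’s implementation; the 5 cm voxel size, the pose inputs and the send callback are assumptions for illustration.

```python
import numpy as np

VOXEL = 0.05  # assumed 5 cm map resolution

def fuse_depth_frame(global_map, depth_points, rotation, translation):
    """Transform one frame of depth points (~300,000 of them, 30 times per
    second) into the world frame and accumulate them into a voxel map."""
    world_pts = depth_points @ rotation.T + translation
    voxels = np.unique(np.floor(world_pts / VOXEL).astype(int), axis=0)
    for v in map(tuple, voxels):
        global_map[v] = global_map.get(v, 0) + 1   # crude occupancy count
    return voxels                                   # the voxels touched this frame

def stream_update(send, pose, voxels):
    """Push only the incremental voxel update to the base station and cloud
    over Wi-Fi, rather than the raw point cloud, to keep bandwidth manageable."""
    send({"pose": pose, "voxels": voxels.tolist()})
```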

The SLAM technique they use was developed in-house; CEO Renato Moreno built and sold a company (to Facebook/Oculus) using some of the principles, but improvements to imaging and processing power have made it possible to do it faster and in greater detail than before. Not to mention on a drone that’s flying around an indoor space full of people and valuable inventory.

On a full charge, ABI can fly for about 10 minutes. That doesn’t sound very impressive, but the important thing isn’t staying aloft for a long time — few drones can do that to begin with — but how quickly it can get back up there. That’s where the special docking and charging mechanism comes in.

The Vtrus drone lives on and returns to a little box which, when a tapped-out craft touches down, sets off a patented high-speed charging process. It’s contact-based, not wireless, and happens automatically. The drone can then get back in the air perhaps half an hour or so later, meaning the craft can actually be in the air for as much as six hours a day total.
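
The six-hour figure checks out with simple arithmetic, assuming roughly a half-hour recharge for every ten-minute flight:

```python
flight_min = 10                          # flight time per charge
charge_min = 30                          # assumed dock recharge time ("half an hour or so")
cycle_min = flight_min + charge_min      # 40-minute cycle

cycles_per_day = (24 * 60) // cycle_min              # 36 full cycles per day
airtime_hours = cycles_per_day * flight_min / 60     # 6.0 hours aloft per day
print(cycles_per_day, airtime_hours)
```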

Probably anyone who has had to inspect or maintain any kind of building or space bigger than a studio apartment can see the value in getting frequent, high-precision updates on everything in that space, from storage shelving to heavy machinery. You’d put in an ABI for every X square feet depending on what you need it to do; they can access each other’s data and combine it as well.

This frequency, and the detail with which the drone can inspect and navigate, mean maintenance can become proactive rather than reactive — you see rust on a pipe or a hot spot on a machine during the drone’s hourly pass rather than days later when the part fails. And if you don’t have an expert on site, the full 3D map and even manual drone control can be handed over to your HVAC guy or union rep.

You can see lots more examples of ABI in action at the Vtrus website. Way too many to embed here.

Lenoff, Moreno and third co-founder Carlos Sanchez, who brings the industrial expertise to the mix, explained that their secret sauce is really the software — the drone itself is pretty much off-the-shelf stuff right now, tweaked to their requirements. (The base is an original creation, of course.)

But the software is all custom built to handle not just high-resolution 3D mapping in real time but the means to stream and record it as well. They’ve hired experts to build those systems as well — the 6-person team already sounds like a powerhouse.

The whole operation is self-funded right now, and the team is seeking investment. But that doesn’t mean they’re idle: they’re working with major companies already and operating a “pilotless” program (get it?). The team has been traveling the country visiting facilities, showing how the system works, and collecting feedback and requests. It’s hard to imagine they won’t have big clients soon.


Safety and inspection bot startup Gecko Robotics adds $7 million to the coffers


Gecko Robotics aims to save human lives at our nation’s power plants with its wall-climbing robots. To continue doing so, the startup tells TechCrunch it has just secured $7 million from a cadre of high-profile sources, including Founders Fund, Mark Cuban, The Westly Group, Justin Kan and Y Combinator.

We first reported on the Pittsburgh-based company when co-founder Jake Loosararian came to the TechCrunch TV studios to show off his device for the camera. Back then, Gecko was in the YC Winter 2016 cohort, working with several U.S. power plants and headed toward profitability, according to Loosararian.

You can see the original interview below:

The type of robot Gecko makes is an important part of ensuring safety in industrial and power plant facilities, as it is able to go ahead of humans to check for potential hazards. The robots can climb tanks, boilers, pipelines and other industrial equipment using proprietary magnetic adhesion, ultrasonics, lasers and a variety of sensors to inspect structural integrity, according to a company release.

While not cheap — the robots run anywhere from $50,000 to $100,000 — that is obviously a minuscule cost compared to a human life.

Gecko robot scaling the wall for a safety inspection at a power plant.

Loosararian also told TechCrunch that his technology is faster and more accurate than what is out there at the moment, using machine learning “to solve some of the most difficult problems.”

It’s also a unique enough idea to get the attention of several seasoned investors.

“There has been virtually no innovation in industrial services technology for decades,” Founders Fund partner Trae Stephens told TechCrunch in a statement. “Gecko’s robots massively reduce facility shutdown time while gathering critical performance data and preventing potentially fatal accidents. The demand for what they are building is huge.”

Those interested can see the robots in action in the video below:

Diesel_tank_A from Gecko Robotics, Inc on Vimeo.


Autonomous retail startup Inokyo’s first store feels like stealing


Inokyo wants to be the indie Amazon Go. It’s just launched its prototype cashierless autonomous retail store. Cameras track what you grab from shelves, and with a single QR scan of its app on your way in and out of the store, you’re charged for what you got.

Inokyo‘s first store is now open on Mountain View’s Castro Street selling an array of bougie kombuchas, snacks, protein powders and bath products. It’s sparse and a bit confusing, but offers a glimpse of what might be a commonplace shopping experience five years from now. You can get a glimpse yourself in our demo video below:

“Cashierless stores will have the same level of impact on retail as self-driving cars will have on transportation,” Inokyo co-founder Tony Francis tells me. “This is the future of retail. It’s inevitable that stores will become increasingly autonomous.”

Inokyo (rhymes with Tokyo) is now accepting signups for beta customers who want early access to its Mountain View store. The goal is to collect enough data to dictate the future product array and business model. Inokyo is deciding whether it wants to sell its technology as a service to other retail stores, run its own stores or work with brands to improve their products’ positioning based on in-store sensor data on customer behavior.

“We knew that building this technology in a lab somewhere wouldn’t yield a successful product,” says Francis. “Our hypothesis here is that whoever ships first, learns in the real world and iterates the fastest on this technology will be the ones to make these stores ubiquitous.” Inokyo might never grow into a retail giant ready to compete with Amazon and Whole Foods. But its tech could even the playing field, equipping smaller businesses with the tools to keep tech giants from having a monopoly on autonomous shopping experiences.

It’s about what cashiers do instead

“Amazon isn’t as far ahead as we assumed,” Francis remarks. He and his co-founder Rameez Remsudeen took a trip to Seattle to see the Amazon Go store that first traded cashiers for cameras in the U.S. Still, they realized, “This experience can be magical.” The two met at Carnegie Mellon through machine learning classes before going on to apply that knowledge at Instagram and Uber. They decided that if they jumped into autonomous retail soon enough, they could still have a say in shaping its direction.

Next week, Inokyo will graduate from the Y Combinator accelerator, which provided its initial seed funding. In six weeks during the program, the founders found a retail space on Mountain View’s main drag, studied customer behavior in traditional stores, built an initial product line and developed the technology to track what shoppers take off the shelves.

Here’s how the Inokyo store works. You download its app and connect a payment method, and you get a QR code that you wave in front of a little sensor as you stroll into the shop. Overhead cameras will scan your body shape and clothing without facial recognition in order to track you as you move around the store. Meanwhile, on-shelf cameras track when products are picked up or put back. Combined, knowing who’s where and what’s grabbed lets it assign the items to your cart. You scan again on your way out, and later you get a receipt detailing the charges.
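
A toy sketch of how those two camera streams could be reconciled into a cart is below. The event names and the nearest-shopper matching rule are invented for illustration; Inokyo hasn’t published how its system actually does this.

```python
from collections import defaultdict

carts = defaultdict(list)   # anonymous shopper ID -> items currently held
positions = {}               # anonymous shopper ID -> (x, y) from overhead cameras

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def on_shopper_tracked(shopper_id, xy):
    """Overhead cameras keep an anonymous track (body shape and clothing, no faces)."""
    positions[shopper_id] = xy

def on_shelf_event(product, shelf_xy, picked_up):
    """A shelf camera reports a pickup or put-back; attribute it to whichever
    tracked shopper is currently closest to that shelf."""
    nearest = min(positions, key=lambda s: dist(positions[s], shelf_xy))
    if picked_up:
        carts[nearest].append(product)
    elif product in carts[nearest]:
        carts[nearest].remove(product)
```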

Originally, Inokyo actually didn’t make you scan on the way out, but it got the feedback that customers were scared they were actually stealing. The scan-out is more about peace of mind than engineering necessity. There is a subversive pleasure to feeling like, “well, if Inokyo didn’t catch all the stuff I chose, that’s not my problem.” And if you’re overcharged, there’s an in-app support button for getting a refund.

Inokyo co-founders (from left): Tony Francis and Rameez Remsudeen

Inokyo was accurate in what it charged me despite me doing a few switcheroos with products I nabbed. But there were only about three people in the room at the time. The real test for these kinds of systems is when a rush of customers floods in and cameras have to differentiate between multiple similar-looking people. Inokyo will likely need to be more than 99 percent accurate to be more of a help than a headache. An autonomous store that constantly over- or undercharges would be more trouble than it’s worth, and patrons would just go to the nearest classic shop.

Just because autonomous retail stores will be cashier-less doesn’t mean they’ll have no staff. To maximize cost-cutting, they could just trust that people won’t loot it. However, Inokyo plans to have someone minding the shop to make sure people scan in the first place and to answer questions about the process. But there’s also an opportunity in reassigning labor from cashiers to concierges who can recommend the best products or find the right fit for the customer. These stores will be judged by the convenience of the holistic experience, not just the tech. At the very least, a single employee might be able to handle restocking, customer support and store maintenance once freed from cashier duties.

The Amazon Go autonomous retail store in Seattle is equipped with tons of overhead cameras

While Amazon Go uses cameras in a similar way to Inokyo, it also relies on weight sensors to track items. There are plenty of other companies chasing the cashierless dream. China’s BingoBox has nearly $100 million in funding and has more than 300 stores, though they use less sophisticated RFID tags. Fellow Y Combinator startup Standard Cognition has raised $5 million to equip old-school stores with autonomous camera-tech. AiFi does the same, but touts that its cameras can detect abnormal behavior that might signal someone is a shoplifter.

The store of the future seems like more and more of a sure thing. The race’s winner will be determined by who builds the most accurate tracking software, easy-to-install hardware and pleasant overall shopping flow. If this modular technology can cut costs and lines without alienating customers, we could see our local brick-and-mortars adapt quickly. The bigger question than if or even when this future arrives is what it will mean for the millions of workers who make their living running the checkout lane.


This robot maintains tender, unnerving eye contact


Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.

The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.

It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.

At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.

In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
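
In rough terms, imitative mode is a mapping from tracked face features to servo targets, with some smoothing to tame that noisy data. The sketch below is a guess at the shape of such a loop, not Todo’s implementation; the feature names, the get()/set() interface and the smoothing factor are all assumptions.

```python
def mirror_expression(face, seer, alpha=0.3):
    """Map tracked face features onto SEER-style head and eye actuators.
    `face` is assumed to expose normalized 0..1 feature values and `seer`
    to expose get()/set() for each degree of freedom (both hypothetical)."""
    targets = {
        "brow":   face.brow_raise,    # 0 = lowered, 1 = fully raised
        "eyelid": face.eye_openness,
        "yaw":    face.head_yaw,      # head pose, normalized
        "pitch":  face.head_pitch,
    }
    for joint, value in targets.items():
        current = seer.get(joint)
        # Low-pass filter the noisy per-frame face data so the head eases
        # toward the target instead of vibrating
        seer.set(joint, current + alpha * (value - current))
```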

Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.

This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.


This bipedal robot has a flying head


Making a bipedal robot is hard. You have to maintain exquisite balance at all times and, even with the amazing things Atlas can do, there is still a chance that your crazy robot will fall over and bop its electronic head. But what if that head is a quadcopter?

Researchers at the University of Tokyo have done just that with their wild Aerial-Biped. The robot isn’t truly bipedal; instead it’s designed to look and act like a bipedal robot without the tricky business of actually balancing on two legs. Think of these legs as a fun bit of puppetry that mimics walking but doesn’t really walk.

“The goal is to develop a robot that has the ability to display the appearance of bipedal walking with dynamic mobility, and to provide a new visual experience. The robot enables walking motion with very slender legs like those of a flamingo without impairing dynamic mobility. This approach enables casual users to choreograph biped robot walking without expertise. In addition, it is much cheaper compared to a conventional bipedal walking robot,” the team told IEEE.

The robot is similar to the bizarre-looking Ballu, a blimp robot with a floating head and spindly legs. The new robot learned how to walk convincingly through machine learning, a feat that gives it a realistic gait even though it is really an aerial system. It’s definitely a clever little project and could be interesting at a theme park or in an environment where a massive bipedal robot falling over on someone might be discouraged.


This happy robot helps kids with autism


A little bot named QTrobot from LuxAI could be the link between therapists, parents, and autistic children. The robot, which features an LCD face and robotic arms, allows kids who are overwhelmed by human contact to become more comfortable in a therapeutic setting.

The project comes from LuxAI, a spin-off of the University of Luxembourg. They will present their findings at the RO-MAN 2018 conference at the end of this month.

“The robot has the ability to create a triangular interaction between the human therapist, the robot, and the child,” co-founder Aida Nazarikhorram told IEEE. “Immediately the child starts interacting with the educator or therapist to ask questions about the robot or give feedback about its behavior.”

The robot reduces anxiety in autistic children and the researchers saw many behaviors – hand flapping, for example – slow down with the robot in the mix.

Interestingly, the robot is a better choice for children than an app or tablet. Because the robot is “embodied,” the researchers found that it draws attention and improves learning, especially when compared to a standard iPad/educational app pairing. In other words, children play with tablets and work with robots.

The robot is entirely self-contained and easily programmable. It can run for hours at a time and includes a 3D camera and full processor.

The researchers found that the robot doesn’t become the focus of the therapy but instead helps the therapist connect with the patient. This, obviously, is an excellent outcome for an excellent (and cute) little piece of technology.


Analysis backs claim drones were used to attack Venezuela’s president


Analysis of open source information carried out by the investigative website Bellingcat suggests drones that had been repurposed as flying bombs were indeed used in an attack on the president of Venezuela this weekend.

The Venezuelan government claimed three days ago that an attempt had been made to assassinate President Nicolás Maduro using two drones loaded with explosives. The president had been giving a speech which was being broadcast live on television when the incident occurred.

Initial video from a state-owned television network showed the reaction of Maduro, those around him and a parade of soldiers at the event to what appeared to be two blasts somewhere off camera. But the footage did not include shots of any drones or explosions.

AP also reported that firefighters at the scene had cast doubt on the drone attack claim — suggesting there had instead been a gas explosion in a nearby flat.

Since then more footage has emerged, including videos purporting to show a drone exploding and a drone tumbling alongside a building.

Video evidence of the second drone, which exploded in the air without causing collateral damage #Sucesos Video courtesy pic.twitter.com/ipWR2sbYvW

— Caracas News 24 🌐 (@CaracasNews24) August 5, 2018

Bellingcat has carried out an analysis of publicly available information related to the attack, syncing timings from the state broadcast of Maduro’s speech and using frame-by-frame analysis, combined with photos and satellite imagery of Caracas, to pinpoint the locations of the additional footage that has emerged and determine whether the drone attack claim stands up.

The Venezuelan government has claimed the drones used were DJI Matrice 600s, each carrying approximately 1kg of C4 plastic explosive and, when detonated, capable of causing damage at a radius of around 50 meters.

DJI Matrice 600 drones are a commercial model, normally used for industrial work — with a U.S. price tag of around $5,000 apiece, suggesting the attack could have cost little over $10k to carry out — with 1kg of plastic explosive available commercially (for demolition purposes) at a cost of around $30.

Bellingcat says its analysis supports the government’s claim that the drone model used was a DJI Matrice 600, noting that the drones involved in the event each had six rotors. It also points to a photo of drone wreckage which appears to show the distinctive silver rotor tip of the model, although it also notes the drones appear to have had their legs removed.

Venezuela’s interior minister, Nestor Reverol, also claimed the government thwarted the attack using “special techniques and [radio] signal inhibitors”, which “disoriented” the drone that detonated closest to the presidential stand — a capability Bellingcat notes the Venezuelan security services are reported to have.

The second drone was said by Reverol to have “lost control” and crashed into a nearby building.

Bellingcat says it is possible to geolocate the video of the falling drone to the same location as the fire in the apartment that firefighters had claimed was caused by a gas canister explosion. It adds that images taken of this location during the fire show a hole in the wall of the apartment in the vicinity of where the drone would have crashed.

“It is a very likely possibility that the downed drone subsequently detonated, creating the hole in the wall of this apartment, igniting a fire, and causing the sound of the second explosion which can be heard in Video 2 [of the state TV broadcast of Maduro’s speech],” it further suggests.

Here’s its conclusion:

From the open sources of information available, it appears that an attack took place using two DBIEDs while Maduro was giving a speech. Both the drones appear visually similar to DJI Matrice 600s, with at least one displaying features that are consistent with this model. These drones appear to have been loaded with explosive and flown towards the parade.

The first drone detonated somewhere above or near the parade, the most likely cause of the casualties announced by the Venezuelan government and pictured on social media. The second drone crashed and exploded approximately 14 seconds later and 400 meters away from the stage, and is the most likely cause of the fire which the Venezuelan firefighters described.

It also considers the claim of attribution by a group on social media calling itself “Soldados de Franelas” (aka ‘T-Shirt Soldiers’ — a reference to protesters wrapping a t-shirt around their heads to cover their faces and protect their identities). Bellingcat suggests it’s not clear from the group’s Twitter messages that they are “unequivocally claiming responsibility for the event”, owing to the use of passive language and to a claim that the drones were shot down by government snipers — which it says “does not appear to be supported by the open source information available”.
