Sam Bartusek of Duro UAS recently spoke with Dr. Brendan Englot, researcher and assistant professor at Stevens Institute of Technology, about his fascinating work in underwater robotics — his current lab develops software that makes robots more autonomous and self-sufficient, especially in complex environments and under uncertain conditions.
Guest Post by Sam Bartusek, Duro UAS:
SB: Thanks so much for taking the time to talk with us. Could you start by telling me a bit of personal background? Where did you grow up and how did you get involved with robotics?
BE: I’m pretty local; I grew up in Queens, New York City. I did my undergraduate and graduate studies, through the PhD, at MIT. As an undergraduate I realized I wanted to study robotics, and then I looked for opportunities to pursue it in graduate school.
It was during the process of getting accepted into graduate school and trying to find a thesis project that an opportunity arose to work on a robot that was being designed to perform an autonomous in-water ship hull inspection. It was a prototype that had been in development at MIT for about five years before that, and it was getting to a state of maturity where they needed new students to help with the algorithms and the autonomy of the robot. It had already become a pretty capable physical platform and now they were trying to make it more intelligent.
The project involved working on algorithms for autonomous navigation that would be used by that robot to inspect a ship hull, to know exactly where it is relative to the ship at all times, and ideally also detect any anomalies along the hull. In particular, they were interested in minesweeping, so the goal there was to sweep the hull with sonar and with a camera and try to detect if any mines were planted on the hull. I worked on that throughout my entire time in graduate school, from 2007 to 2012.
SB: Was that your first underwater project?
BE: That was the very first marine robotics project I got involved in. All I knew at that point was that I wanted to study robotics, and I was looking for opportunities to study it in grad school. It ultimately came down to which project was the first one where funding was available, and that was the one. And then I ended up getting really immersed in that field. I grew to really like it and wanted to continue working in that area. I was largely motivated by the fact that there are such tough navigation problems underwater: sensing is tough, control is tough. I think some of the biggest challenges in autonomy are in that domain.
SB: Have you done any work in industry?
BE: Yes, after I got my PhD I spent about two years working in industry, at United Technologies, a large aerospace and building-technologies company, looking more at the aerospace side of robotics. I was working specifically with the Sikorsky helicopter company, which makes the Black Hawk helicopter widely used by the Army.
Their interest was in creating technology to add to their existing helicopters that would allow them to fly unmanned if necessary. If a helicopter had to fly into a contested area, to perform a medical evacuation for example, they could send it in without a pilot, and it wouldn’t be as costly if the helicopter were to be shot down. The work there involved adding a whole autonomy pipeline — sensors, algorithms — to a helicopter, so it could fly an unmanned mission from takeoff to landing.
SB: So to dive into that minesweeping technology, how does that actually work? Could you give me some of the details of that process?
BE: This robot was designed to do something equivalent to ‘mowing the lawn’ — going back and forth in a big zig-zag pattern along the hull, sweeping everything. In the course of doing so it would build a map of the hull and use that information both to localize the robot very accurately along the hull and to detect any mine-shaped objects that entered the robot’s field of view. The Office of Naval Research was funding this project, and they have a very large effort dedicated to mine countermeasures — both the kind we were looking for, which could be planted on ships or structures, and mines that might be buried on the sea floor or floating. Ultimately, any sort of mine that poses a danger to Navy ships.
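The ‘mowing the lawn’ sweep Dr. Englot describes is essentially a boustrophedon coverage pattern. As a rough illustration only — this is a hypothetical sketch, not the project’s actual code, and the function name and parameters are invented for the example — the waypoint sequence for a flat rectangular patch of hull might be generated like this:

```python
def lawnmower_waypoints(width, height, swath):
    """Generate back-and-forth sweep waypoints covering a width x height
    patch, with parallel sweep lines spaced one sensor swath apart.

    Hypothetical illustration of a boustrophedon coverage pattern; a real
    hull inspection plans over a curved 3D surface, not a flat rectangle.
    """
    waypoints = []
    y = 0.0
    going_right = True
    while y <= height:
        if going_right:
            # Sweep left-to-right along this line.
            waypoints.append((0.0, y))
            waypoints.append((width, y))
        else:
            # Sweep right-to-left on the return pass.
            waypoints.append((width, y))
            waypoints.append((0.0, y))
        going_right = not going_right
        y += swath  # Step over by one sensor swath for the next line.
    return waypoints

# A 10 m x 4 m patch with a 2 m sonar swath yields three sweep lines.
path = lawnmower_waypoints(width=10.0, height=4.0, swath=2.0)
```

Each consecutive pair of waypoints is one sweep line, with the travel direction alternating so the vehicle never has to double back across ground it has already covered.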
SB: Are there any civilian applications for this technology?
BE: One of the challenging things is that although you could use that technology to detect a mine, I think it would be very difficult to detect something like a structural crack. You might be able to do it with a camera, but with a sonar, you might not have the clarity or resolution you would need in order to detect something like a crack.
For civilian applications, I think if you’re interested in just getting a general idea of what’s under the water, if you want to build a map or model of the structures underwater, then it’s perfectly fine for doing that. If you want to find a needle in a haystack, it might be harder to do, depending on how large the item is that you’re looking for. The mines that were of interest in this program were primarily limpet mines, which are magnetically attached to a steel structure — they’re about the size of your wallet, so they’re large enough that even a sonar would be capable of detecting them.
SB: So how is autonomous technology better than other technologies and methods, like human divers or even animals?
BE: The predominant method is relying on human divers who do these sweeps manually. A team of divers will go down and check the ship, shining flashlights at it while trying to make sure they’ve swept the whole thing and can confirm there are no mines on it. There is also a marine mammal program that the Navy runs, where they train dolphins, sea lions, and other mammals to try to perform these kinds of sweeps themselves.
I don’t know to what extent that method is used in practice. I know the capability exists, I just don’t know how widely they use it. I’ve been told there are some flaws with that capability, because sometimes you can’t be a hundred percent certain the animal has looked at everything. The animals are good at recognizing certain shapes and letting you know when and where they’ve found them, but they can also be a little temperamental, so in a critical wartime scenario the Navy might rely on human divers instead.
So the idea was to keep any living thing out of potential danger in the water by doing this in a completely automated manner. The main thing that I was contributing to that project was the motion-planning algorithm, which this robot would use when it had to look at areas where ‘mowing the lawn’ was not sufficient — specifically areas like the stern of the ship, where you have to look for a mine but you have to look above and between the rudders, the shaft, the propellers and all of the complex structures that would be harder to sweep. The prototype that was developed during this project is now a commercial product and is being produced in quantity.
You can read the full-length interview with Dr. Brendan Englot on the Duro UAS blog.