Construction Robots Learn to Excavate by Mimicking Humans

Human movements can teach robots the skills they need to dig holes and—maybe someday—build the first colonies on Mars

By Lynne Peskoe-Yang


SE4 research engineer Nathan Quinn, wearing a VR headset and using handheld controls, showed Squeezie the excavator robot how to stack blocks at SIGGRAPH in July.


Photo: Sam Thomason/SE4


 


Pavel Savkin remembers the first time he watched a robot imitate his movements. Minutes earlier, the engineer had finished “showing” the robotic excavator its new goal by directing its movements manually. Now, running on software Savkin helped design, the robot was reproducing his movements, gesture for gesture. “It was like there was something alive in there—but I knew it was me,” he said.




Savkin is the CTO of SE4, a robotics software project that styles itself the “driver” of a fleet of robots that will eventually build human colonies in space. For now, SE4 is focused on creating software that can help developers communicate with robots, rather than on building hardware of its own.


The Tokyo-based startup showed off an industrial arm from Universal Robots that was running SE4’s proprietary software at SIGGRAPH in July. SE4’s demonstration at the Los Angeles innovation conference drew the company’s largest audience yet. The robot, nicknamed Squeezie, stacked real blocks as directed by SE4 research engineer Nathan Quinn, who wore a VR headset and used handheld controls to “show” Squeezie what to do. 


As Quinn manipulated blocks in a virtual 3D space, the software learned a set of ordered instructions to be carried out in the real world. That order is essential for remote operations, says Quinn. To build remotely, developers need a way to communicate instructions to robotic builders on location. In the age of digital construction and industrial robotics, giving a computer a blueprint for what to build is a well-explored art. But operating on a distant object—especially under conditions that humans haven’t experienced themselves—presents challenges that only real-time communication with operators can solve.
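To make the idea concrete, here is a minimal sketch, in Python, of how a VR demonstration might be captured as an ordered instruction list and replayed on a robot. SE4's software is proprietary, so every name below (Instruction, DemoRecorder, PrintingRobot) is a hypothetical stand-in rather than the company's actual API.

```python
# Hypothetical sketch: record an operator's demonstration as an ordered
# list of steps, then replay them in the same order. Not SE4's real API.
from dataclasses import dataclass, field


@dataclass
class Instruction:
    """One step captured from the operator's VR demonstration."""
    action: str   # e.g. "grasp", "move", "release"
    target: str   # object the action applies to
    pose: tuple   # target position (x, y, z) in the workspace


@dataclass
class DemoRecorder:
    """Accumulates operator gestures into an ordered instruction list."""
    steps: list = field(default_factory=list)

    def record(self, action, target, pose):
        self.steps.append(Instruction(action, target, pose))

    def replay(self, robot):
        # Order matters: the robot executes steps in demonstration order.
        for step in self.steps:
            robot.execute(step)


class PrintingRobot:
    """Stand-in robot that just logs each step it would perform."""
    def execute(self, step):
        print(f"{step.action} {step.target} at {step.pose}")


demo = DemoRecorder()
demo.record("grasp", "block_1", (0.2, 0.0, 0.05))
demo.record("move", "block_1", (0.2, 0.3, 0.10))
demo.record("release", "block_1", (0.2, 0.3, 0.10))
demo.replay(PrintingRobot())
```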




The problem is that, in an unpredictable setting, even simple tasks require not only instruction from an operator, but constant feedback from the changing environment. Five years ago, the Swedish fiber network provider umea.net (part of the private Umeå Energy utility) took advantage of the virtual reality boom to promote its high-speed connections with the help of a viral video titled “Living with Lag: An Oculus Rift Experiment.” The video is still circulated in VR and gaming circles. 


In the experiment, volunteers donned headgear that replaced their real-time biological senses of sight and sound with camera and audio feeds of their surroundings—both set at a 3-second delay. Thus equipped, the volunteers attempted to complete everyday tasks like playing ping-pong, dancing, cooking, and walking on a beach, with decidedly slapstick results.


At interplanetary distances, including SE4's dream of construction projects on Mars, the limiting factor in communication speed is not an artificial delay, but the laws of physics. The shifting relative positions of Earth and Mars mean that communications between the planets—even at the speed of light—can take anywhere from 3 to 22 minutes.
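That range is easy to sanity-check: dividing the minimum and maximum Earth-Mars distances (roughly 54.6 million and 401 million kilometers) by the speed of light reproduces the quoted figures.

```python
# Back-of-envelope check of the 3-to-22-minute figure quoted above.
SPEED_OF_LIGHT_KM_S = 299_792.458

for label, distance_km in [("closest approach", 54.6e6), ("farthest", 401e6)]:
    one_way_s = distance_km / SPEED_OF_LIGHT_KM_S
    print(f"{label}: {one_way_s / 60:.1f} minutes one way")
# closest approach: 3.0 minutes one way
# farthest: 22.3 minutes one way
```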




A long-distance relationship

Imagine trying to manage a construction project from across an ocean without the benefit of intelligent workers: sending a ship to an unknown world with a construction crew and blueprints for a log cabin, and four months later receiving a letter back asking how to cut down a tree. The parallel problem in long-distance construction with robots, according to SE4 CEO Lochlainn Wilson, is that automation relies on predictability. “Every robot in an industrial setting today is expecting a controlled environment.”


Platforms for applying AR and VR systems to teach tasks to artificial intelligences, as SE4 does, are already proliferating in manufacturing, healthcare, and defense. But all of the related communications systems are bound by physics and, specifically, the speed of light.


The same fundamental limitation applies in space. “Our communications are light-based, whether they’re radio or optical,” says Laura Seward Forczyk, a planetary scientist and consultant for space startups. “If you’re going to Mars and you want to communicate with your robot or spacecraft there, you need to have it act semi- or mostly-independently so that it can operate without commands from Earth.”






Semantic control

That’s exactly what SE4 aims to do. By teaching robots to group micro-movements into logical units—like all the steps to building a tower of blocks—the Tokyo-based startup lets robots make simple relational judgments that would allow them to receive a full set of instruction modules at once and carry them out in order. This sidesteps the latency issue in real-time bilateral communications that could hamstring a project or at least make progress excruciatingly slow.
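As an illustration of that batching idea, the sketch below groups micro-steps into named modules and sends the whole plan in a single transmission, so execution requires no mid-task round trips. The names and structure are invented for this example, not drawn from SE4's software.

```python
# Hypothetical sketch: batch ordered micro-steps into logical modules so a
# complete plan can cross a high-latency link once. No names come from SE4.
from dataclasses import dataclass


@dataclass
class Module:
    """A logical unit of work (e.g. 'build tower') grouping micro-steps."""
    name: str
    steps: list  # ordered micro-movements, executed locally by the robot


def transmit(plan):
    """Stand-in for the one-shot, high-latency uplink to the robot."""
    return plan  # a real link would serialize and send this in one message


def execute_plan(plan):
    # The robot works through modules in order, with no round trips
    # back to the operator between steps.
    for module in plan:
        for step in module.steps:
            print(f"[{module.name}] {step}")


plan = [
    Module("clear site", ["scan area", "move rock_1 aside"]),
    Module("build tower", ["place block_1", "place block_2 on block_1"]),
]
execute_plan(transmit(plan))
```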


The key to the platform, says Wilson, is the team’s proprietary operating software, “Semantic Control.” Just as in linguistics and philosophy, “semantics” refers to meaning itself, and meaning is the key to a robot’s ability to make even the smallest decisions on its own. “A robot can scan its environment and give [raw data] to us, but it can’t necessarily identify the objects around it and what they mean,” says Wilson.




That’s where human intelligence comes in. As part of the demonstration phase, the human operator of an SE4-controlled machine “annotates” each object in the robot’s vicinity with meaning. By labeling objects in the VR space with useful information—like which objects are building material and which are rocks—the operator helps the robot make sense of its real 3D environment before the building begins. 
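To picture what that annotation step produces, here is a hypothetical sketch of merging a robot's raw scan data with operator-supplied labels; the label set and object properties are assumptions for illustration, not SE4's actual data model.

```python
# Illustrative only: operator labels turn raw scanned objects into
# meaningful ones the robot can act on. All values are hypothetical.
SEMANTIC_LABELS = {"building_material", "obstacle", "tool"}

# What the robot's scan returns: object ids with raw geometry, no meaning.
scanned = {"obj_01": {"shape": "cuboid"}, "obj_02": {"shape": "irregular"}}

# Labels the operator assigned to each object during the VR session.
annotations = {"obj_01": "building_material", "obj_02": "obstacle"}

# Merge: only annotated objects become actionable for the robot.
for obj_id, props in scanned.items():
    label = annotations.get(obj_id)
    if label in SEMANTIC_LABELS:
        props["meaning"] = label
        print(f"{obj_id}: {props['shape']} -> {label}")
```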


Giving robots the tools to deal with a changing environment is an important step toward allowing the AI to be truly independent, but it’s only an initial step. “We’re not letting it do absolutely everything,” said Quinn. “Our robot is good at moving an object from point A to point B, but it doesn’t know the overall plan.” Wilson adds that delegating environmental awareness and raw mechanical power to separate agents is the optimal relationship for a mixed human-robot construction team; it “lets humans do what they’re good at, while robots do what they do best.”




This story was updated on 4 September 2019. 

https://spectrum.ieee.org/tech-talk/robotics/robotics-software/construction-robots-learn-to-excavate-by-mimicking-humans

