
Teaching robots about tools with Neural Radiance Fields (NeRF)

New research from the University of Michigan offers a way for robots to understand the mechanics of tools and other articulated objects in the real world, by creating Neural Radiance Field (NeRF) representations that capture the way these objects move, allowing the robot to interact with them and use them without tedious custom preconfiguration.

By using known reference sources for a tool’s internal motion (or that of any object with a suitable reference), NARF22 can compile a realistic approximation of the tool, its range of motion, and the way it operates. Source: https://progress.eecs.umich.edu/projects/narf/

Robots that are required to do more than just avoid pedestrians or perform pre-programmed actions (for which potentially non-reusable datasets would have had to be laboriously labeled and trained on) need this kind of adaptability if they are going to work with the same materials and objects that the rest of us have to deal with.

So far, there have been a number of obstacles to imparting this kind of versatility to robotic systems. These include the paucity of viable datasets, many of which cover a very limited number of objects; the huge expense involved in creating the kind of realistic, mesh-based 3D models that can help robots learn about tools in a real-world context; and the unrealistic quality of those datasets that might otherwise be up to the challenge, which causes objects to appear disjointed from what the robot perceives in the world around it, training it to search for a cartoon-like object that will never appear in reality.

To address this, the researchers in Michigan, whose paper is titled ‘NARF22: Neural Articulated Radiance Fields for Configuration-Aware Rendering’, developed a two-stage pipeline to generate NeRF-based articulated objects that have a ‘real world’ appearance, and which encode the movement and constraints of any given articulated object.

Although it appears more complex, the two primary phases of the NARF22 pipeline involve rendering static parts of the moving tools, and then assembling these into a second dataset that is informed by the motion parameters that these parts possess, relative to each other.  Source: https://arxiv.org/pdf/2210.1166.pdf

The system is called Neural Articulated Radiance Fields – or NARF22, to distinguish it from another similar project.

NARF22

Determining whether or not an unknown object is likely to be articulated requires an almost unimaginable amount of the prior knowledge that humans take for granted. For example, if you have never seen a closed drawer before, it might look like any other type of decorative panel – it is not until you actually open a drawer that you come to understand the ‘drawer’ as an articulated object with a single axis of movement (forward and back).

Therefore, NARF22 is not intended to be an exploratory system for picking things up and seeing if they contain actionable moving parts – ape-like behavior that entails a number of potentially catastrophic scenarios. Instead, the framework draws on the knowledge available in the Unified Robot Description Format (URDF) – an open source, XML-based format that is widely applicable and well-suited to the task. A URDF file contains usable parameters for the movement of an object, as well as descriptions and other addressable aspects of the object’s parts.
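As a concrete illustration of the kind of articulation data a URDF provides, the snippet below is a minimal, hypothetical sketch (not taken from the paper or its dataset): a single revolute joint with a named axis and motion limits, read with Python’s standard-library XML parser. The tool name and all values are invented.

```python
# Hypothetical example of URDF articulation data, parsed with the Python
# standard library. The joint limits are the "usable parameters for movement"
# mentioned above; names and numbers here are invented for illustration.
import xml.etree.ElementTree as ET

URDF_EXAMPLE = """
<robot name="longnose_pliers">
  <link name="handle_left"/>
  <link name="handle_right"/>
  <joint name="pivot" type="revolute">
    <parent link="handle_left"/>
    <child link="handle_right"/>
    <axis xyz="0 0 1"/>
    <limit lower="0.0" upper="0.6" effort="1.0" velocity="1.0"/>
  </joint>
</robot>
"""

root = ET.fromstring(URDF_EXAMPLE)
for joint in root.findall("joint"):
    axis = joint.find("axis").get("xyz")
    limit = joint.find("limit")
    print(f"{joint.get('name')} ({joint.get('type')}): "
          f"axis {axis}, range {limit.get('lower')}-{limit.get('upper')} rad")
```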

In traditional pipelines, it is generally necessary to describe the articulation capabilities of an object by hand, and to label the related joint values. This is not a cheap or easily scalable job. Instead, the NARF22 workflow renders the individual components of the object before ‘assembling’ these static components into a NeRF-based representation, with knowledge of the motion parameters provided by the URDF.
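One plausible way to picture this ‘assembly’ of static parts – a sketch of the general idea, assuming a simple density-weighted merge rather than the paper’s actual compositing scheme – is to map each query point into every part’s rest frame via the joint transform given by the URDF, query that part’s radiance field there, and combine the results:

```python
# Sketch of composing per-part radiance fields under a joint configuration.
# This is an illustrative simplification, not the paper's implementation.
import numpy as np

def revolute_transform(theta, axis, origin):
    """Rigid transform of a revolute joint: rotate by theta about `axis`
    passing through `origin` (Rodrigues' rotation formula)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    origin = np.asarray(origin, dtype=float)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = origin - R @ origin
    return T

def query_articulated(x_world, theta, part_fields, joints):
    """part_fields: callables mapping a 3D point to (rgb, sigma) for each
    static part; joints: (axis, origin) per part, or None for the base part.
    Colours are merged by density weighting -- a simplification."""
    rgbs, sigmas = [], []
    for field, joint in zip(part_fields, joints):
        T = np.eye(4) if joint is None else np.linalg.inv(
            revolute_transform(theta, joint[0], joint[1]))
        x_part = (T @ np.append(x_world, 1.0))[:3]
        rgb, sigma = field(x_part)
        rgbs.append(np.asarray(rgb)); sigmas.append(sigma)
    total = max(sum(sigmas), 1e-8)
    rgb = sum(s * c for s, c in zip(sigmas, rgbs)) / total
    return rgb, total
```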

In the second stage of the process, a completely new renderer is created that incorporates all the parts. Although it might be easier to simply assemble the individual parts at an early stage and skip this later step, the researchers note that the final model – which was trained on an NVIDIA RTX 3080 GPU with an AMD 5600X CPU – had lower computational requirements during backpropagation than such an early, premature assembly would.

In addition, the second-stage model renders twice as fast as a naive sequential composition of the parts, and any downstream applications that might need to use information about the static parts of the model would not need their own access to the URDF information, because this is already baked into the final-stage renderer.
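A rough sketch of what a configuration-aware final-stage renderer could look like follows; the layer sizes, the single scalar joint input, and the exact input split are assumptions for illustration, not details taken from the paper.

```python
# Sketch of a NeRF-style MLP that takes the joint configuration as an extra
# input alongside the encoded position and view direction.
import torch
import torch.nn as nn

class ConfigurableField(nn.Module):
    def __init__(self, pos_dim=63, dir_dim=27, cfg_dim=1, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim + cfg_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)        # volume density
        self.rgb_head = nn.Sequential(                # view-dependent colour
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, x_enc, d_enc, config):
        # Condition the field directly on the articulation configuration.
        h = self.trunk(torch.cat([x_enc, config], dim=-1))
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.rgb_head(torch.cat([h, d_enc], dim=-1))
        return rgb, sigma
```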

Data and experiments

The researchers conducted a number of experiments to test NARF22: one to assess the qualitative rendering of each object’s configuration and pose; a quantitative test to compare the rendered results with similar views seen by robots in the real world; and a demonstration of a six degrees of freedom (6DoF) pose estimation and refinement challenge that used NARF22 to perform gradient-based optimization.

Training data was obtained from Progress Tools, a dataset from a previous paper prepared by several authors of the current work. Progress Tools contains about six thousand RGB-D images (including the depth information that robots need in order to see) at a resolution of 640 x 480. The scenes used included eight hand tools, divided into their component parts, complete with mesh models and information on the kinematic properties of the objects (e.g., the way each is designed to move, and the parameters of that motion).

The Progress Tools dataset features four articulated tools. The images above are NeRF-based renderings from NARF22.

For this experiment, a final, configurable model was trained using linemen’s pliers, longnose pliers, and a clamp (see image above). The training data contained one configuration of the clamp, and one for each pair of pliers.

The implementation of NARF22 is based on FastNeRF, with the input parameters modified to account for the configuration and the spatially encoded pose of the tools. FastNeRF uses a multilayer perceptron (MLP) paired with a voxelized sampling mechanism (voxels are essentially pixels, but with full 3D coordinates, so that they can operate in 3D space).
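The ‘spatially encoded’ input refers, as in standard NeRF work, to a sinusoidal positional encoding; the sketch below shows that encoding under this assumption, with an illustrative number of frequency bands.

```python
# Standard NeRF positional encoding: each coordinate is lifted to sine/cosine
# features at increasing frequencies. num_freqs is illustrative.
import torch

def positional_encoding(x, num_freqs=10):
    """x: (..., 3) points -> (..., 3 + 6 * num_freqs) encoded features."""
    feats = [x]
    for k in range(num_freqs):
        feats.append(torch.sin((2.0 ** k) * torch.pi * x))
        feats.append(torch.cos((2.0 ** k) * torch.pi * x))
    return torch.cat(feats, dim=-1)
```

With ten frequency bands, a 3D point becomes a 63-dimensional feature vector – the kind of input consumed by the MLP sketched earlier.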

For the qualitative test, the researchers note that the clamp has several occluded parts (for example, the central spine) which cannot be known or guessed at by observing the object, but only by interacting with it, and that the system has difficulty recreating this ‘unknown’ geometry.

Qualitative renderings of the tools.

By contrast, the pliers were able to generalize well to new configurations (that is, extensions and movements of their parts that fall within the parameters of the URDF, but were not explicitly covered in the model’s training material).

The researchers note, however, that labeling errors in the tool poses reduced the quality of the renderings – a problem that speaks to broader concerns about the logistics, budget and accuracy of labeling in computer vision research, rather than to any procedural shortcoming in the NARF22 pipeline.

Rendering accuracy test results.

For the configuration estimation tests, the researchers performed pose optimization and configuration estimation from an initial ‘rigid’ pose estimate, avoiding any of the caching or other acceleration techniques used by FastNeRF itself.

They then evaluated 17 curated scenes from the Progress Tools test set (held out during training), which were run through 150 iterations of gradient descent optimization under the Adam optimizer. This procedure recovered the configuration estimate ‘extremely well,’ according to the researchers.
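The sketch below gives the general shape of such a refinement loop; `render` is a hypothetical stand-in for a trained, differentiable NARF22 renderer, and only the articulation configuration is optimized here, while the actual test also refines pose.

```python
# Gradient-based configuration refinement through a differentiable renderer.
# `render` is a hypothetical callable: config -> (H, W, 3) predicted image.
import torch

def estimate_configuration(render, observed_rgb, init_config,
                           steps=150, lr=1e-2):
    """Recover an articulation configuration by gradient descent on a
    photometric loss between rendered and observed images."""
    config = torch.tensor([init_config], requires_grad=True)
    optimizer = torch.optim.Adam([config], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        predicted = render(config)
        loss = torch.mean((predicted - observed_rgb) ** 2)
        loss.backward()
        optimizer.step()
    return config.detach()
```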

Results from a configuration estimation test.

First published on October 5, 2022.
