Hydromel

3D Vision and Visual Servoing for MEMS Micromanipulation and Microassembly by CNRS

Fig. 1: Developed 5-dof microassembly workcell.

A – Home-made microassembly workcell
This MEMS microassembly station was developed at the FEMTO-ST (CNRS) laboratory (see Fig. 1). It includes a robotic system with five high-accuracy degrees of freedom (dof): a 3-dof positioning platform (two linear stages x and y, and one rotating stage θ) and a 2-dof micromanipulator (one vertical linear stage z and one rotating stage mounted at 45° from the vertical one). It also comprises a 4-dof microhandling system allowing open-and-close as well as up-and-down motions, based on piezoelectric actuators consisting of two parallel piezoceramic PZT PIC 151 bimorphs. The imaging system is a LEICA MZ 16 A video stereomicroscope.

 

 

Fig. 2: Fully automatic MEMS manipulation using multiple-scale visual servoing.

B – High-precision multiple-scale visual servoing
This work concerns the design of a new multiple-scale vision-based control scheme. The method is based on monocular, multiple-scale image-based visual servoing using dynamic zoom and focus. The micromanipulation tasks achieved with the developed approach (see Fig. 2) reach an accuracy of 1.4 μm in positioning and 0.5° in orientation.
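As a rough illustration of the underlying control principle, the sketch below implements a generic image-based visual servoing law, not the authors' exact controller; the gain lam, the interaction matrix L, and the feature vectors are illustrative placeholders:

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classical image-based visual servoing law (illustrative sketch).

    s      : current image features, shape (n,)
    s_star : desired image features, shape (n,)
    L      : interaction (image Jacobian) matrix, shape (n, 6)
    lam    : proportional gain

    Returns the 6-dof camera velocity screw (vx, vy, vz, wx, wy, wz)
    that exponentially decreases the feature error e = s - s_star.
    """
    e = s - s_star
    # The Moore-Penrose pseudo-inverse handles non-square or
    # rank-deficient interaction matrices.
    return -lam * np.linalg.pinv(L) @ e
```

In a multiple-scale scheme, the features and the interaction matrix would be re-estimated after each change of zoom, since the pixel-to-metre ratio varies with the magnification.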

 

 

C – 3D MEMS assembly using 3D CAD model-based tracking and 3D visual servoing

Fig. 3: Application of the tracker on a micropart. The points (p) are used to estimate the pose.
Fig. 4: Sequence of images captured during the microassembly process. The right image illustrates the assembled MEMS inside an SEM.
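As a minimal, hypothetical sketch of how tracked points can serve to estimate a micropart's pose from its CAD model (the authors' tracker is not necessarily implemented this way), the snippet below solves a standard Perspective-n-Point problem with OpenCV; all numeric values are placeholders:

```python
import numpy as np
import cv2

# 3D points of the micropart taken from its CAD model (object frame, in mm)
# and their tracked 2D projections in the microscope image (pixels).
# The coordinates below are illustrative placeholders.
model_pts = np.array([[0.0, 0.0, 0.0], [0.4, 0.0, 0.0],
                      [0.4, 0.4, 0.0], [0.0, 0.4, 0.0]], dtype=np.float64)
image_pts = np.array([[320, 240], [420, 242],
                      [418, 338], [318, 336]], dtype=np.float64)

# Assumed intrinsic matrix of the calibrated video microscope
# (large focal length in pixels, typical of high magnification).
K = np.array([[8000, 0, 320],
              [0, 8000, 240],
              [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, distCoeffs=None)
# rvec / tvec give the 3D pose of the micropart in the camera frame,
# which the tracker can refine frame by frame.
```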

 

 

Fig. 5: Sequence of SEM images captured during the microassembly process.

D – NEMS tracking inside a scanning electron microscope (SEM)
The 3D tracking and 3D visual servoing methods were also tested on SEM images in order to perform NEMS nanomanipulation and nanoassembly. The results are shown in Fig. 5, which illustrates a sequence with the CAD model projected onto the micropart. A pallet of silicon microparts (40 µm × 40 µm × 5 µm each) is placed inside the SEM.
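For illustration only, a generic position-based (3D) visual servoing law of the kind such experiments rely on can be sketched as follows; the gain and the pose representations are assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pbvs_velocity(t, q, t_star, q_star, lam=0.5):
    """Position-based (3D) visual servoing law (illustrative sketch).

    t, q           : current object position (3,) and orientation
                     (quaternion, x-y-z-w convention)
    t_star, q_star : desired position and orientation
    lam            : proportional gain

    Returns translational and rotational velocity commands that
    drive the pose error, estimated by the 3D tracker, to zero.
    """
    v = -lam * (t - t_star)
    # Orientation error expressed as a rotation vector (axis * angle).
    r_err = (R.from_quat(q) * R.from_quat(q_star).inv()).as_rotvec()
    w = -lam * r_err
    return v, w
```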


Fig. 6: The 3D model of the gripper tips computed using the depth-from-focus method.

E – 3D vision-based depth-from-focus approach
Because of the limited depth-of-field of the optical microscope, a 3D object cannot be perceived in its entirety: the focus must be continuously adjusted to view the region of interest (ROI). To compensate for this drawback, the proposed depth-from-focus method can be used to obtain a full 3D representation of the object. The method consists in computing the focused area of every image in a sequence of the scene, each image being acquired from the same point of view at a different focus (corresponding to a depth). Note that a video microscope with a motorized focus makes it possible to obtain equidistant focal planes. The focused areas are then stacked up according to their positions to yield the 3D reconstruction of the scene (see Fig. 6).
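A minimal sketch of this idea, assuming a stack of grayscale images acquired at equidistant focus positions and using the squared Laplacian smoothed over a window as the sharpness measure (one common choice; the exact focus criterion used here is not specified):

```python
import numpy as np
from scipy import ndimage

def depth_from_focus(stack, z_positions):
    """Depth map from a focal stack (illustrative sketch).

    stack       : (N, H, W) array, one grayscale image per focus position
    z_positions : (N,) focus depths (equidistant focal planes)

    For each pixel, pick the focus position where a local sharpness
    measure is maximal; the associated depth gives the 3D relief.
    """
    sharpness = np.empty(stack.shape, dtype=np.float64)
    for i, img in enumerate(stack):
        lap = ndimage.laplace(img.astype(np.float64))
        # Average the squared Laplacian over a 9x9 window around each pixel.
        sharpness[i] = ndimage.uniform_filter(lap ** 2, size=9)
    best = np.argmax(sharpness, axis=0)       # index of the sharpest plane
    return np.asarray(z_positions)[best]      # (H, W) depth map
```

Stacking the in-focus areas at their respective depths, as done on the gripper tips in Fig. 6, amounts to reading this depth map as a 3D surface.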

 

  

Work carried out within the HYDROMEL project (contract NMP2-CT-2006-026622)
Contact: Nadine Piat
Institut FEMTO-ST UMR CNRS 6174 - UFC / ENSMM / UTBM
Départ. Automatique et Systèmes Micro-Mécatroniques (AS2M)
E-mail : nadine.piat@ens2m.fr