%% Cell type:markdown id: tags:
# Recording animal trajectory
## Camera calibration
A camera is an optical instrument for acquiring images. It is composed of a sensor (converting photons into an electrical signal) and a lens (focusing light rays onto the sensor).
Many experimental paradigms require recording the motion of an animal with one or more cameras.
The experimenter is usually more interested in the position of the animal within the arena (world coordinate system) than in its position within the image (camera coordinate system). Therefore, we need to transform the position of the animal in the image into its position in the world. To do so, we need to calibrate our camera. This calibration requires two steps:
* remove the distortion introduced by the lens (intrinsic calibration)
* determine the position and orientation (i.e. pose) of the camera in the environment (extrinsic calibration)
### Intrinsic calibration
* [With Matlab](https://de.mathworks.com/help/vision/ug/single-camera-calibrator-app.html)
* [With OpenCV](https://docs.opencv.org/3.3.1/dc/dbb/tutorial_py_calibration.html)
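The OpenCV route boils down to detecting a known pattern (usually a chessboard) in several images and passing the detected corners to `cv2.calibrateCamera`. The cell below is a minimal sketch of that workflow, assuming a hypothetical set of calibration images `calib_cam_0_*.tif` showing a 9x6 chessboard; adapt the file pattern and board size to your own recordings.
%% Cell type:code id: tags:
``` python
import cv2
import glob
import numpy as np

board_size = (9, 6)  # Inner corners of the (hypothetical) chessboard
# 3D coordinates of the corners in the chessboard's own frame (z = 0 plane)
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)

object_points = list()  # 3D corner positions, repeated for every image
image_points = list()   # Detected 2D corners in every image
for fname in glob.glob('calib_cam_0_*.tif'):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if found:
        object_points.append(objp)
        image_points.append(corners)

# Intrinsic matrix and lens distortion coefficients
retval, camera_matrix, distortion, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, gray.shape[::-1], None, None)
```
%% Cell type:markdown id: tags: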
### Extrinsic calibration
To obtain the pose of the camera (i.e. the extrinsic parameters), we need
to find a transformation such that the projections of known physical 3D points
match the positions of these points on the image. In other words, a ray emanating
from a projected point should pass through the corresponding physical point in 3D.
The camera pose is estimated by solving the PnP problem with RANSAC.
Using RANSAC reduces the adverse effect that outliers may have on the pose
estimation.
#### Building your own manhattan
The manhattan should contain at least 7 towers, and not all towers should
share the same plane or line. For best results it is recommended that the
towers are visible across the entire recorded range of the image. Note,
however, that not all towers need to be visible (at least 7 should be, though).
To create your own manhattan you have to measure all towers (with a precise
ruler, or have it built by a workshop). The software also needs to know the
position of each tower. For example, you can create a DataFrame containing the
tower number and its position, and save it in an HDF file.
%% Cell type:code id: tags:
``` python
import pandas as pd
import numpy as np
import pkg_resources
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from navipy.arenatools.cam_calib import plot_manhattan as cplot
%matplotlib inline
manhattan_filename = 'manhattan.hdf'
manhattan_key = 'manhattan'
num_points = 27 # Number of points
manhattan_3d = pd.DataFrame(index=range(num_points),
                            columns=['x', 'y', 'z'])
# Then you manually fill the manhattan measurement
manhattan_3d.loc[0, :] = [-10, -10, 4.66]
manhattan_3d.loc[1, :] = [-10, -5, 4.39]
manhattan_3d.loc[2, :] = [-10, -0, 4.63]
manhattan_3d.loc[3, :] = [-10, +5, 4.38]
manhattan_3d.loc[4, :] = [-10, +10, 4.85]
manhattan_3d.loc[5, :] = [-8.09, -0.25, 10.13]
manhattan_3d.loc[6, :] = [-5, -10, 4.52]
manhattan_3d.loc[7, :] = [-5, -5, 4.57]
manhattan_3d.loc[8, :] = [-5, -0, 4.36]
manhattan_3d.loc[9, :] = [-5, +5, 4.45]
manhattan_3d.loc[10, :] = [-5, +10, 4.43]
manhattan_3d.loc[11, :] = [0, -10, 4.70]
manhattan_3d.loc[12, :] = [0, -5, 4.57]
manhattan_3d.loc[13, :] = [0, +5, 4.16]
manhattan_3d.loc[14, :] = [0, +10, 4.43]
manhattan_3d.loc[15, :] = [+5, -10, 4.70]
manhattan_3d.loc[16, :] = [+5, -5, 4.56]
manhattan_3d.loc[17, :] = [+5, -0, 4.27]
manhattan_3d.loc[18, :] = [+5, +5, 4.49]
manhattan_3d.loc[19, :] = [+5, +10, 4.50]
manhattan_3d.loc[20, :] = [+8.62, -8.38, 10.19]
manhattan_3d.loc[21, :] = [+8.55, +8.72, 10.09]
manhattan_3d.loc[22, :] = [+10, -10, 4.51]
manhattan_3d.loc[23, :] = [+10, -5, 4.33]
manhattan_3d.loc[24, :] = [+10, -0, 4.69]
manhattan_3d.loc[25, :] = [+10, +5, 4.69]
manhattan_3d.loc[26, :] = [+10, +10, 4.71]
manhattan_3d *= 10  # Because millimeters are nicer to use than centimeters
```
%% Cell type:markdown id: tags:
The manhattan can be saved as follows,
```manhattan_3d.to_hdf(manhattan_filename, manhattan_key)```
and plotted:
%% Cell type:code id: tags:
``` python
cplot(manhattan_3d, unit='mm');
```
%% Output
%% Cell type:markdown id: tags:
#### Towers on the images
The second part of the calibration involves images of the manhattan.
For each camera:
* acquire an image with the manhattan placed in the arena
%% Cell type:code id: tags:
``` python
manhattan_imfile = list()
manhattan_imfile.append(pkg_resources.resource_filename(
    'navipy', 'resources/sample_experiment/Doussot_2018a/20180117_cam_0.tif'))
manhattan_imfile.append(pkg_resources.resource_filename(
    'navipy', 'resources/sample_experiment/Doussot_2018a/20180117_cam_2.tif'))
f, axarr = plt.subplots(1, 2, figsize=(15, 8))
for i, ax in enumerate(axarr):
    image = plt.imread(manhattan_imfile[i])
    # Show the manhattan image
    ax.imshow(image, cmap='gray')
```
%% Output
%% Cell type:markdown id: tags:
* note the pixel position of each tower
%% Cell type:code id: tags:
``` python
# A file list for the pixel coordinates of the towers
filenames_manhattans = list()
filenames_manhattans.append(pkg_resources.resource_filename(
    'navipy', 'resources/sample_experiment/Doussot_2018a/manhattan_cam_0_xypts.csv'))
filenames_manhattans.append(pkg_resources.resource_filename(
    'navipy', 'resources/sample_experiment/Doussot_2018a/manhattan_cam_2_xypts.csv'))
# And then plot
markersize = 10
f, axarr = plt.subplots(1, 2, figsize=(15, 8))
for i, ax in enumerate(axarr):
    image = plt.imread(manhattan_imfile[i])
    manhattan_2d = pd.read_csv(filenames_manhattans[i],
                               names=['x', 'y'],
                               header=0)
    # Show the manhattan image
    ax.imshow(image, cmap='gray')
    toplotx = manhattan_2d.x
    # Because of the inverted y-axis in image coordinates
    toploty = image.shape[0] - manhattan_2d.y
    # Plot markers
    ax.plot(toplotx,
            toploty, 'o',
            markersize=markersize,
            markerfacecolor="None",
            markeredgecolor='red',
            markeredgewidth=2)
    # Plot marker labels
    for mi, xy in enumerate(zip(toplotx, toploty)):
        if np.any(np.isnan(xy)):
            continue
        ax.text(xy[0] + 2 * markersize,
                xy[1] + 2 * markersize,
                '{}'.format(mi), color='r',
                horizontalalignment='left')
```
%% Cell type:markdown id: tags:
* solve the PnP problem with RANSAC, as sketched below
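With OpenCV, this last step can be done with `cv2.solvePnPRansac`, which takes the 3D tower positions, their pixel positions on the image, and the intrinsic parameters. The cell below is a minimal sketch, not navipy's own routine; it assumes the intrinsic calibration is available as `camera_matrix` and `distortion` (hypothetical names, e.g. from an intrinsic calibration as above).
%% Cell type:code id: tags:
``` python
import cv2
# Keep only the towers that were actually digitised on this image
valid = manhattan_2d.dropna().index
object_points = manhattan_3d.loc[valid].values.astype(np.float32)
image_points = manhattan_2d.loc[valid, ['x', 'y']].values.astype(np.float32)
# camera_matrix and distortion are assumed to come from the intrinsic calibration
retval, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, camera_matrix, distortion)
print('rotation vector:', rvec.ravel())
print('translation vector:', tvec.ravel())
```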
%% Cell type:markdown id: tags:
#### Intrinsic and extrinsic together
Alternatively, you can use Matlab:
[With Matlab](https://de.mathworks.com/help/vision/ref/estimateworldcamerapose.html)
%% Cell type:markdown id: tags:
## Triangulation
In computer vision, triangulation refers to the process of determining a point in 3D space given its projections onto two or more images. To solve this problem it is necessary to know the parameters of the camera projection function from 3D to 2D for the cameras involved, in the simplest case represented by the camera matrices. Triangulation is sometimes also referred to as reconstruction.
The triangulation problem is in theory trivial. Each point in an image corresponds to a line in 3D space, such that all points on that line project to the same point in the image. If a pair of corresponding points in two or more images can be found, they must be the projection of a common 3D point x, and the set of lines generated by the image points must therefore intersect at x (the 3D point).
Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in Computer Vision. Cambridge University Press.
### Stereo triangulation
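With two calibrated cameras, a simple way to triangulate is OpenCV's `cv2.triangulatePoints`, which takes the two 3x4 projection matrices and the matched image points. Below is a minimal sketch under assumed names: `P_top` and `P_side` are hypothetical projection matrices and `pts_top`, `pts_side` the matched pixel coordinates of shape (2, N).
%% Cell type:code id: tags:
``` python
import cv2
import numpy as np
# P_top, P_side: 3x4 projection matrices (intrinsics @ [R|t]) of the two cameras
# pts_top, pts_side: matched pixel coordinates, shape (2, N)
points_4d = cv2.triangulatePoints(P_top, P_side, pts_top, pts_side)
# Convert from homogeneous to Euclidean coordinates
points_3d = points_4d[:3, :] / points_4d[3, :]
```
%% Cell type:markdown id: tags: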
### Multi-view triangulation
#### Pair-wise combination
Triangulating projected points from two or more cameras can be done by pairwise triangulation, followed by taking the median of the pairwise estimates.
We need to load n files (n being the number of cameras), each containing the x and y coordinates of the tracked point on the corresponding image.
%% Cell type:code id: tags:
``` python
import numpy as np
from navipy.arenatools.triangulate import triangulate_ncam_pairwise
from navipy.io.ivfile import load as ivload
from navipy.io.opencv import load_cameras_calibration
import pkg_resources
# Use the .tra files from the resources
# You can adapt this code by changing trajfile
# with your own trajectory file
trajfile_side = pkg_resources.resource_filename(
    'navipy',
    'resources/sample_experiment/Lobecke_JEB_2018/Y101_OBFlight_0001_side.tra')
trajfile_top = pkg_resources.resource_filename(
    'navipy',
    'resources/sample_experiment/Lobecke_JEB_2018/Y101_OBFlight_0001_top.tra')
calibfile = pkg_resources.resource_filename(
    'navipy',
    'resources/sample_experiment/Lobecke_JEB_2018/fullcalib_fromtra.xml')
```
%% Cell type:markdown id: tags:
Load the calibration from an XML file (OpenCV-like format)
%% Cell type:code id: tags:
``` python
calib = load_cameras_calibration(calibfile)
```
%% Cell type:markdown id: tags:
Load the trajectory as seen from the top and side cameras
%% Cell type:code id: tags:
``` python
trajdata_side = ivload(trajfile_side)
trajdata_top = ivload(trajfile_top)
```
%% Cell type:markdown id: tags:
Format the data for triangulation
%% Cell type:code id: tags:
``` python
nframes = trajdata_side.shape[0]
ncameras = len(calib)
ncoordinates = 2  # Always two because a camera image is a plane
mark_i = 0  # Marker index
pts_cam = np.zeros((ncameras, nframes, ncoordinates))
# Top camera (index 1)
pts_cam[1, :, 0] = trajdata_top.loc[:, mark_i].x
pts_cam[1, :, 1] = trajdata_top.loc[:, mark_i].y
# Side camera (index 0)
pts_cam[0, :, 0] = trajdata_side.loc[:, mark_i].x
pts_cam[0, :, 1] = trajdata_side.loc[:, mark_i].y
```
%% Cell type:markdown id: tags:
and then triangulate.
%% Cell type:code id: tags:
``` python
point_3d, p3dunscaled, validcmb = triangulate_ncam_pairwise(calib, pts_cam)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(point_3d[:, 0], point_3d[:, 1], point_3d[:, 2])
```
%% Output
[<mpl_toolkits.mplot3d.art3d.Line3D at 0xa69bbe0>]
%% Cell type:markdown id: tags:
#### Singular value decomposition
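A common alternative to pairwise triangulation is the direct linear transform, where the homogeneous 3D point is recovered from all camera views at once as the null space of a linear system, computed with a singular value decomposition. Below is a minimal sketch of this idea (not navipy's implementation), assuming a list of 3x4 projection matrices `proj_mats` and the corresponding pixel coordinates `pts` (one (x, y) pair per camera).
%% Cell type:code id: tags:
``` python
import numpy as np

def triangulate_svd(proj_mats, pts):
    """Triangulate one 3D point from n >= 2 views by SVD (direct linear transform)."""
    rows = []
    for P, (x, y) in zip(proj_mats, pts):
        # Each view contributes two linear constraints on the homogeneous point
        rows.append(x * P[2, :] - P[0, :])
        rows.append(y * P[2, :] - P[1, :])
    A = np.array(rows)
    # The solution is the right singular vector with the smallest singular value
    _, _, vh = np.linalg.svd(A)
    X = vh[-1, :]
    return X[:3] / X[3]
```
%% Cell type:markdown id: tags: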
## Conversion to navipy trajectories
### From Matlab to navipy
%% Cell type:code id: tags:
``` python
from scipy.io import loadmat
import numpy as np
import os
from navipy.trajectories import Trajectory
import pkg_resources
# Use the mat file from the resources
# You can adapt this code by changing trajfile
# with your own trajectory file
trajfile = pkg_resources.resource_filename(
    'navipy',
    'resources/sample_experiment/Lobecke_JEB_2018/Y101_OBFlight_0001.mat')
csvtrajfile, _ = os.path.splitext(trajfile)
csvtrajfile = csvtrajfile + '.csv'
mymat = loadmat(trajfile)
# Matlab files are loaded as a dictionary,
# so we need to identify under which key the trajectory has been saved
print(mymat.keys())
key = 'trajectory'
# Arrays are nested. We need to access the level
# of the array itself.
mymat = mymat[key][0][0][0]
# The array is a numpy array and therefore has a .shape attribute
# to display the array size:
print(mymat.shape)
# In this example the array has 7661 rows and 4 columns.
# The columns are: x, y, z, yaw. Pitch and roll were therefore assumed
# to be constant (here null).
# We can init a trajectory with the convention for rotation,
# here yaw-pitch-roll (i.e. 'rzyx'), and with as many indices as
# sampling points in the trajectory.
rotconvention = 'rzyx'
indeces = np.arange(0, mymat.shape[0])
mytraj = Trajectory(rotconv=rotconvention, indeces=indeces)
# We now can assign the values
mytraj.x = mymat[:, 0]
mytraj.y = mymat[:, 1]
mytraj.z = mymat[:, 2]
mytraj.alpha_0 = mymat[:, 3]
mytraj.alpha_1 = 0
mytraj.alpha_2 = 0
# We can then save mytraj as a csv file, for example
mytraj.to_csv(csvtrajfile)
mytraj.head()
```
%% Output
dict_keys(['__header__', '__version__', '__globals__', 'trajectory', 'nests', 'cylinders'])
(7661, 4)
location rzyx
x y z alpha_0 alpha_1 alpha_2
0 3.056519 -214.990482 9.330593 2.79751 0 0
1 4.611665 -215.020314 8.424138 2.80863 0 0
2 4.556650 -214.593236 9.185016 2.81407 0 0
3 4.643091 -213.829769 10.542035 2.82704 0 0
4 4.647302 -214.431592 7.461187 2.82896 0 0
%% Cell type:markdown id: tags:
### From csv to navipy
Navipy can read csv files with 7 columns and a two-line header
<table>
<tr>
<td>location</td>
<td>location</td>
<td>location</td>
<td>rzyx</td>
<td>rzyx</td>
<td>rzyx</td>
</tr>
<tr>
<td>x</td>
<td>y</td>
<td>z</td>
<td>alpha_0</td>
<td>alpha_1</td>
<td>alpha_2</td>
</tr>
</table>
But you may have a csv file with a different format.
Here we show how to convert your csv file format to the one required by navipy, as sketched below.
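The exact conversion depends on your file. As an illustration, the cell below is a minimal sketch assuming a hypothetical csv file `mytrajectory.csv` with a plain single-line header and columns x, y, z and yaw (in radians); adapt the file name and column names to your own data.
%% Cell type:code id: tags:
``` python
import numpy as np
import pandas as pd
from navipy.trajectories import Trajectory

# Hypothetical csv file with a single-line header: x, y, z, yaw
mydata = pd.read_csv('mytrajectory.csv')
mytraj = Trajectory(rotconv='rzyx', indeces=np.arange(mydata.shape[0]))
# Fill the trajectory from the csv columns
mytraj.x = mydata.x
mytraj.y = mydata.y
mytraj.z = mydata.z
mytraj.alpha_0 = mydata.yaw
mytraj.alpha_1 = 0  # pitch assumed constant (here null)
mytraj.alpha_2 = 0  # roll assumed constant (here null)
# Save in the two-line-header format expected by navipy
mytraj.to_csv('mytrajectory_navipy.csv')
```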