Commit eeb035a4 authored by Olivier Bertrand's avatar Olivier Bertrand

Update lolliplot

parent 24353716
%% Cell type:markdown id: tags:
# Recording animal trajectory
## Camera calibration
A camera is an optical instrument for acquiring images. Cameras are composed of a sensor (converting photons into an electrical signal) and a lens (focusing light rays onto the sensor). \
Many experimental paradigms require recording animal motion with one or more cameras. \
The experimenter is more interested in the position of the animal in his or her arena (world \
coordinate system) than in the position of the animal within the image (camera coordinate system). Therefore, we need to transform the position of the animal in the image to its position in the world. To do so, we need to calibrate our camera. This calibration requires two steps:
* remove the distortion of the lens (intrinsic calibration)
* determine the position and orientation (i.e. pose) of the camera in the environment (extrinsic calibration)
### Intrinsic calibration
`With Matlab <https://de.mathworks.com/help/vision/ug/single-camera-calibrator-app.html>`_
`With Opencv <https://docs.opencv.org/3.3.1/dc/dbb/tutorial_py_calibration.html>`_
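For example, with OpenCV the intrinsic parameters can be estimated from several images of a checkerboard, following the tutorial linked above. The sketch below is a minimal outline, not navipy code: the image name pattern `calib_*.png` and the 9x6 board are assumptions to adapt to your own recordings.
%% Cell type:code id: tags:
``` python
import glob
import cv2
import numpy as np
# 3D corner coordinates in the checkerboard frame, for a 9x6 board
# (scale by the real square size of your board)
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)
objpoints = []  # 3D points of the board
imgpoints = []  # matching 2D points in the images
for fname in glob.glob('calib_*.png'):  # assumed image names
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (9, 6))
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)
ret, cam_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
```
%% Cell type:markdown id: tags: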
### Extrinsic calibration
To obtain the pose of the camera (i.e. the extrinsic parameters), we need
to find a transformation such that the projections of the physical points match
the positions of these points on the image. In other words, a ray emanating from a projected
point should cross the corresponding physical point in 3D.
The camera pose is estimated by solving the PnP problem with RANSAC.
Using RANSAC reduces the adverse effect that outliers may have on the pose
estimation.
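In the standard pinhole model, the image point $m=(u,v,1)^T$ of a physical point $M=(X,Y,Z,1)^T$ (both in homogeneous coordinates) is

$$ s\,m = K\,[R|t]\,M $$

where $K$ is the intrinsic matrix obtained above, $s$ is an arbitrary scale factor, and the rotation $R$ and translation $t$ form the pose that the PnP problem estimates.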
#### Building your own manhattan
The manhattan should contain at least 7 towers, and not all towers should
share the same plane or line. For best results it is recommended that the
towers are visible across the entire recorded range of the image. Note,
however, that not all towers need to be visible (at least 7, though).
To create your own manhattan you have to measure all towers (with a precise
ruler, or have it built by a workshop). The software needs to know the position
of each tower as well. For example you can create a DataFrame containing the
tower number and its position, and save it in an HDF file.
%% Cell type:code id: tags:
``` python
import pandas as pd
import numpy as np
import pkg_resources
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from navipy.arenatools.cam_calib import plot_manhattan as cplot
%matplotlib inline
manhattan_filename = 'manhattan.hdf'
manhattan_key = 'manhattan'
num_points = 27  # Number of points
manhattan_3d = pd.DataFrame(index=range(num_points),
                            columns=['x', 'y', 'z'])
# Then you manually fill in the manhattan measurements
manhattan_3d.loc[0, :] = [-10, -10, 4.66]
manhattan_3d.loc[1, :] = [-10, -5, 4.39]
manhattan_3d.loc[2, :] = [-10, -0, 4.63]
manhattan_3d.loc[3, :] = [-10, +5, 4.38]
manhattan_3d.loc[4, :] = [-10, +10, 4.85]
manhattan_3d.loc[5, :] = [-8.09, -0.25, 10.13]
manhattan_3d.loc[6, :] = [-5, -10, 4.52]
manhattan_3d.loc[7, :] = [-5, -5, 4.57]
manhattan_3d.loc[8, :] = [-5, -0, 4.36]
manhattan_3d.loc[9, :] = [-5, +5, 4.45]
manhattan_3d.loc[10, :] = [-5, +10, 4.43]
manhattan_3d.loc[11, :] = [0, -10, 4.70]
manhattan_3d.loc[12, :] = [0, -5, 4.57]
manhattan_3d.loc[13, :] = [0, +5, 4.16]
manhattan_3d.loc[14, :] = [0, +10, 4.43]
manhattan_3d.loc[15, :] = [+5, -10, 4.70]
manhattan_3d.loc[16, :] = [+5, -5, 4.56]
manhattan_3d.loc[17, :] = [+5, -0, 4.27]
manhattan_3d.loc[18, :] = [+5, +5, 4.49]
manhattan_3d.loc[19, :] = [+5, +10, 4.50]
manhattan_3d.loc[20, :] = [+8.62, -8.38, 10.19]
manhattan_3d.loc[21, :] = [+8.55, +8.72, 10.09]
manhattan_3d.loc[22, :] = [+10, -10, 4.51]
manhattan_3d.loc[23, :] = [+10, -5, 4.33]
manhattan_3d.loc[24, :] = [+10, -0, 4.69]
manhattan_3d.loc[25, :] = [+10, +5, 4.69]
manhattan_3d.loc[26, :] = [+10, +10, 4.71]
manhattan_3d *= 10  # Because millimeters are nicer to use than centimeters
```
%% Cell type:markdown id: tags:
The manhattan can be saved as follows,
```manhattan_3d.to_hdf(manhattan_filename, manhattan_key)```
and plotted:
%% Cell type:code id: tags:
``` python
cplot(manhattan_3d, unit='mm');
```
%% Cell type:markdown id: tags:
#### Towers on the images
The second part of the calibration involves images of the manhattan.
For each camera:
* acquire an image, with the manhattan placed in the arena
%% Cell type:code id: tags:
``` python
manhattan_imfile = list()
manhattan_imfile.append(pkg_resources.resource_filename(
    'navipy', 'resources/sample_experiment/Doussot_2018a/20180117_cam_0.tif'))
manhattan_imfile.append(pkg_resources.resource_filename(
    'navipy', 'resources/sample_experiment/Doussot_2018a/20180117_cam_2.tif'))
f, axarr = plt.subplots(1, 2, figsize=(15, 8))
for i, ax in enumerate(axarr):
    image = plt.imread(manhattan_imfile[i])
    # Show the manhattan image
    ax.imshow(image, cmap='gray')
```
%% Cell type:markdown id: tags:
* note the pixel position of each tower
%% Cell type:code id: tags:
``` python
# A filelist for the pixel coordinates of the towers
filenames_manhattans = list()
filenames_manhattans.append(pkg_resources.resource_filename(
    'navipy', 'resources/sample_experiment/Doussot_2018a/manhattan_cam_0_xypts.csv'))
filenames_manhattans.append(pkg_resources.resource_filename(
    'navipy', 'resources/sample_experiment/Doussot_2018a/manhattan_cam_2_xypts.csv'))
# And then plot
markersize = 10
f, axarr = plt.subplots(1, 2, figsize=(15, 8))
for i, ax in enumerate(axarr):
    image = plt.imread(manhattan_imfile[i])
    manhattan_2d = pd.read_csv(filenames_manhattans[i],
                               names=['x', 'y'],
                               header=0)
    # Show the manhattan image
    ax.imshow(image, cmap='gray')
    toplotx = manhattan_2d.x
    # Because of the inverted y axis in images
    toploty = image.shape[0] - manhattan_2d.y
    # Plot markers
    ax.plot(toplotx,
            toploty, 'o',
            markersize=markersize,
            markerfacecolor="None",
            markeredgecolor='red',
            markeredgewidth=2)
    # Plot marker labels
    for mi, xy in enumerate(zip(toplotx, toploty)):
        if np.any(np.isnan(xy)):
            continue
        ax.text(xy[0] + 2 * markersize,
                xy[1] + 2 * markersize,
                '{}'.format(mi), color='r',
                horizontalalignment='left')
```
%% Cell type:markdown id: tags:
* solve the PnP problem with RANSAC (see the sketch below)
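A minimal sketch of this step with OpenCV is given below. It assumes the intrinsic calibration (`cam_matrix`, `dist_coeffs`) obtained earlier, and that the rows of `manhattan_2d` without NaN correspond to the same rows of `manhattan_3d`.
%% Cell type:code id: tags:
``` python
import cv2
# Keep only the towers that are visible on this camera
valid = manhattan_2d.dropna().index
pts_3d = manhattan_3d.loc[valid].values.astype(np.float32)
pts_2d = manhattan_2d.loc[valid].values.astype(np.float32)
# cam_matrix and dist_coeffs come from the intrinsic calibration
retval, rvec, tvec, inliers = cv2.solvePnPRansac(
    pts_3d, pts_2d, cam_matrix, dist_coeffs)
# rvec is a Rodrigues rotation vector; together with the
# translation tvec it describes the camera pose
rotation_matrix, _ = cv2.Rodrigues(rvec)
```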
%% Cell type:markdown id: tags:
#### Intrinsic and Extrinsic together
Alternatively you can use Matlab:
`With Matlab <https://de.mathworks.com/help/vision/ref/estimateworldcamerapose.html>`
%% Cell type:markdown id: tags:
## Triangulation
In computer vision, triangulation refers to the process of determining a point in 3D space given its projections onto two or more images. In order to solve this problem it is necessary to know the parameters of the camera projection function from 3D to 2D for the cameras involved, in the simplest case represented by the camera matrices. Triangulation is sometimes also referred to as reconstruction.
The triangulation problem is in theory trivial. Each point in an image corresponds to a line in 3D space, such that all points on this line are projected to that point in the image. If a pair of corresponding points in two or more images can be found, they must be the projections of a common 3D point x, and the set of lines generated by the image points must intersect at x.
Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in Computer Vision. Cambridge University Press
### Stereo triangulation
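A minimal stereo sketch with OpenCV: assuming two 3x4 projection matrices `P_top` and `P_side` (built as `K [R|t]` from each camera's intrinsic and extrinsic calibration) and matched, undistorted pixel coordinates, the 3D points are recovered as follows.
%% Cell type:code id: tags:
``` python
import cv2
import numpy as np
# pts_top, pts_side: 2xN arrays of matching pixel coordinates
# P_top, P_side: 3x4 projection matrices of the two cameras
point_4d = cv2.triangulatePoints(P_top, P_side, pts_top, pts_side)
# Convert from homogeneous to euclidean coordinates
point_3d = (point_4d[:3] / point_4d[3]).T  # Nx3 array
```
%% Cell type:markdown id: tags: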
### Multi-view triangulation
#### Pair-wise combination
Points projected onto two or more cameras can be triangulated for each pair of cameras, the final 3D point being the median over all pairs.
We need to load n files (n being the number of cameras), each containing the x and y coordinates of the animal on the image.
%% Cell type:code id: tags:
``` python
import numpy as np
from navipy.arenatools.triangulate import triangulate_ncam_pairwise
from navipy.io.ivfile import load as ivload
from navipy.io.opencv import load_cameras_calibration
import pkg_resources
# Use the tra files from the resources
# You can adapt this code, by changing trajfile
# to your own trajectory file
trajfile_side = pkg_resources.resource_filename(
    'navipy',
    'resources/sample_experiment/Lobecke_JEB_2018/Y101_OBFlight_0001_side.tra')
trajfile_top = pkg_resources.resource_filename(
    'navipy',
    'resources/sample_experiment/Lobecke_JEB_2018/Y101_OBFlight_0001_top.tra')
calibfile = pkg_resources.resource_filename(
    'navipy',
    'resources/sample_experiment/Lobecke_JEB_2018/fullcalib_fromtra.xml')
```
%% Cell type:markdown id: tags:
Load the calibration from an xml file (opencv-like format)
%% Cell type:code id: tags:
``` python
calib = load_cameras_calibration(calibfile)
```
%% Cell type:markdown id: tags:
Load the trajectory as seen from the top and side cameras
%% Cell type:code id: tags:
``` python
trajdata_side = ivload(trajfile_side)
trajdata_top = ivload(trajfile_top)
```
%% Cell type:markdown id: tags:
Format the data for triangulation
%% Cell type:code id: tags:
``` python
nframes = trajdata_side.shape[0]
ncameras = len(calib)
ncoordinates = 2  # Always two because a camera image is a plane
mark_i = 0  # Index of the tracked marker
pts_cam = np.zeros((ncameras, nframes, ncoordinates))
# First camera (side)
pts_cam[0, :, 0] = trajdata_side.loc[:, mark_i].x
pts_cam[0, :, 1] = trajdata_side.loc[:, mark_i].y
# Second camera (top)
pts_cam[1, :, 0] = trajdata_top.loc[:, mark_i].x
pts_cam[1, :, 1] = trajdata_top.loc[:, mark_i].y
```
%% Cell type:markdown id: tags:
and then triangulate.
%% Cell type:code id: tags:
``` python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
point_3d, p3dunscaled, validcmb = triangulate_ncam_pairwise(calib, pts_cam)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(point_3d[:, 0], point_3d[:, 1], point_3d[:, 2])
```
%% Output
[<mpl_toolkits.mplot3d.art3d.Line3D at 0xa69bbe0>]
%% Cell type:markdown id: tags:
#### Singular value decomposition
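All cameras can also be combined at once with the direct linear transform (DLT), solved by a singular value decomposition. The sketch below is a generic implementation, not navipy's own routine: each camera contributes two rows to a design matrix built from its 3x4 projection matrix and the observed pixel coordinates.
%% Cell type:code id: tags:
``` python
import numpy as np

def triangulate_svd(proj_matrices, points_2d):
    """Triangulate one 3D point from n >= 2 views via DLT.

    proj_matrices: list of n 3x4 projection matrices
    points_2d: n x 2 array of pixel coordinates
    """
    design = []
    for P, (u, v) in zip(proj_matrices, points_2d):
        design.append(u * P[2] - P[0])
        design.append(v * P[2] - P[1])
    # The solution is the right singular vector associated
    # with the smallest singular value
    _, _, vh = np.linalg.svd(np.array(design))
    point_h = vh[-1]
    # Convert from homogeneous to euclidean coordinates
    return point_h[:3] / point_h[3]
```
%% Cell type:markdown id: tags: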
## Conversion to navipy trajectories
### From Matlab to navipy
%% Cell type:code id: tags:
``` python
from scipy.io import loadmat
import numpy as np
import os
from navipy.trajectories import Trajectory
import pkg_resources
# Use the trajectory file from the resources
# You can adapt this code, by changing trajfile
# to your own trajectory file
trajfile = pkg_resources.resource_filename(
    'navipy',
    'resources/sample_experiment/Lobecke_JEB_2018/Y101_OBFlight_0001.mat')
csvtrajfile, _ = os.path.splitext(trajfile)
csvtrajfile = csvtrajfile + '.csv'
mymat = loadmat(trajfile)
# Matlab files are loaded into a dictionary;
# we need to identify under which key the trajectory has been saved
print(mymat.keys())
key = 'trajectory'
# Arrays are placed in a tuple. We need to access the level
# of the array itself.
mymat = mymat[key][0][0][0]
# The array is a numpy array, and therefore has the .shape
# attribute to display the array size:
print(mymat.shape)
# In this example the array has 7661 rows and 4 columns.
# The columns are: x, y, z, yaw. Pitch and roll were thus
# assumed to be constant (here null).
# We can therefore init a trajectory with the rotation convention
# yaw-pitch-roll (i.e. 'rzyx') and with as many sampling points
# (indeces) as we have in the trajectory
rotconvention = 'rzyx'
indeces = np.arange(0, mymat.shape[0])
mytraj = Trajectory(rotconv=rotconvention, indeces=indeces)
# We now can assign the values
mytraj.x = mymat[:, 0]
mytraj.y = mymat[:, 1]
mytraj.z = mymat[:, 2]
mytraj.alpha_0 = mymat[:, 3]
mytraj.alpha_1 = 0
mytraj.alpha_2 = 0
# We can then save mytraj as a csv file, for example
mytraj.to_csv(csvtrajfile)
mytraj.head()
```
%% Output
dict_keys(['__header__', '__version__', '__globals__', 'trajectory', 'nests', 'cylinders'])
(7661, 4)
location rzyx
x y z alpha_0 alpha_1 alpha_2
0 3.056519 -214.990482 9.330593 2.79751 0 0
1 4.611665 -215.020314 8.424138 2.80863 0 0
2 4.556650 -214.593236 9.185016 2.81407 0 0
3 4.643091 -213.829769 10.542035 2.82704 0 0
4 4.647302 -214.431592 7.461187 2.82896 0 0
%% Cell type:markdown id: tags:
### From csv to navipy
Navipy can read csv files with 7 columns (an index column followed by the six columns below) and a two-line header
<table>
<tr>
<td>location</td>
<td>location</td>
<td>location</td>
<td>rzyx</td>
<td>rzyx</td>
<td>rzyx</td>
</tr>
<tr>
<td>x</td>
<td>y</td>
<td>z</td>
<td>alpha_0</td>
<td>alpha_1</td>
<td>alpha_2</td>
</tr>
</table>
But you may have a csv file with a different format.
Here we are going to show how to convert your csv file format to the one required by navipy.
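For example, a minimal sketch assuming a hypothetical file `mydata.csv` with four plain columns `x, y, z, yaw` (angles already in radians):
%% Cell type:code id: tags:
``` python
import pandas as pd
from navipy.trajectories import Trajectory
# Hypothetical csv file with columns: x, y, z, yaw
mydata = pd.read_csv('mydata.csv')
mytraj = Trajectory(rotconv='rzyx', indeces=mydata.index)
mytraj.x = mydata.x
mytraj.y = mydata.y
mytraj.z = mydata.z
mytraj.alpha_0 = mydata.yaw  # yaw is alpha_0 in the 'rzyx' convention
mytraj.alpha_1 = 0  # pitch assumed constant
mytraj.alpha_2 = 0  # roll assumed constant
# Save in the navipy csv format
mytraj.to_csv('mydata_navipy.csv')
```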
......
%% Cell type:markdown id: tags:
# Recording along a trajectory
In this tutorial, you will learn to take advantage of navipy and the rendering module (blender) to render what has been seen by the animal along its trajectory.
To render along a trajectory, you will need to have:
* the environment (a blender file)
* the trajectory formatted for navipy (see 02-recording-animal-trajectory)
## Plotting the position and orientation of the animal
The trajectory of the animal can be plotted in different manners. Here we will look at two representations:
* Timeseries
* Lollipop
### Timeseries
Plotting a trajectory over time allows one to visualize temporal dependencies between its components. For example, flying insects engage in an active vision strategy consisting of saccades and intersaccades. By plotting the Euler angles over time, one observes a step-like profile, i.e. a succession of plateaus (intersaccades) and quick changes of angle (saccades).
The positional and orientational components of the trajectory have different units (a unit of distance and a unit of angle, respectively). Therefore two different y-axes are necessary, as sketched after the loading cell below.
We first load a trajectory:
%% Cell type:code id: tags:
``` python
from navipy.trajectories import Trajectory
import pkg_resources
# Use the trajectory file from the resources
# (the csv file created in 02-recording-animal-trajectory).
# You can adapt this code, by changing trajfile
# to your own trajectory file
trajfile = pkg_resources.resource_filename(
    'navipy',
    'resources/sample_experiment/Lobecke_JEB_2018/Y101_OBFlight_0001.csv')
mytraj = Trajectory().read_csv(trajfile)
```
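%% Cell type:markdown id: tags:
A minimal sketch of the timeseries representation, assuming the trajectory loaded above: positions are drawn on the left y-axis and the yaw angle on a second y-axis created with matplotlib's `twinx` (the axis labels are assumptions about the units of the sample data).
%% Cell type:code id: tags:
``` python
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax_pos = plt.subplots(figsize=(15, 4))
# Positional components on the left y-axis
ax_pos.plot(mytraj.x, label='x')
ax_pos.plot(mytraj.y, label='y')
ax_pos.plot(mytraj.z, label='z')
ax_pos.set_xlabel('frame')
ax_pos.set_ylabel('position')
ax_pos.legend(loc='upper left')
# Orientational components on a second y-axis
ax_angle = ax_pos.twinx()
ax_angle.plot(mytraj.alpha_0, 'k', label='yaw (alpha_0)')
ax_angle.set_ylabel('angle [rad]')
ax_angle.legend(loc='upper right')
```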
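%% Cell type:markdown id: tags:
### Lollipop
A lollipop plot shows the position of the animal in 3D, with a stick ("lollipop tail") indicating the facing direction of the animal at regularly spaced points. A minimal sketch follows; the method name `lollipops`, its `ax` argument, and the parameter values used here are assumptions based on the docstring of `Trajectory` visible in the diff at the end of this commit (which adds the `stickdir` option).
%% Cell type:code id: tags:
``` python
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
# Draw a facing-direction stick every 50th point,
# pointing backward from the head position
mytraj.lollipops(ax, step_lollipop=50, stickdir='backward')
```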
%% Cell type:markdown id: tags:
## Checking the position of the animal within the environment
Rendering a trajectory takes time, and thus we want to check - prior to rendering - that the trajectory is correctly placed within the environment. The best way to check is to overlay the trajectory on the blender world: by rotating the environment, one can look for parts of the trajectory crossing objects, a wrong height, etc...
To overlay the trajectory on your blender environment you can use the following command:
```bash
blendoverlaytraj --blenderworld='pathtomyworld.blend'
                 --trajectory='pathtomytrajectory.csv'
```
here ```pathtomyworld.blend``` and ```pathtomytrajectory.csv``` are the paths to your blender environment and your trajectory, respectively.
> The overlaid trajectory will not be rendered, because it is a simple line and its rendering is disabled. If you want
> to render the trajectory, you can bevel the trajectory with a circle, for example. This will extrude the circle
> along the trajectory and thus create a 3D shape. (Don't forget to enable rendering.)
## Rendering the trajectory in a database
Once we know that the trajectory is correctly placed within the environment, it is time to render along the trajectory. We, however, recommend rendering only the first frame of your trajectory first, in order to check the orientation. Indeed, we only checked the position of the animal, but not its orientation. Navipy supports all 24 Euler conventions, and also quaternions. You need to figure out which one you used; sadly, we have not yet found an easy way to do this... Having said that, if you "only" tracked the yaw of the animal, you can safely use the 'rzyx' convention: alpha_0 of your trajectory then corresponds to the yaw of the animal, and all other angles are set to 0.
```bash
blendalongtraj --output-file='pathtodatabase.db'
               --blenderworld='pathtomyworld.blend'
               --trajectory='pathtomytrajectory.csv'
```
here ```pathtomyworld.blend```, ```pathtomytrajectory.csv```, and ```pathtodatabase.db``` are the paths to your blender environment, your trajectory, and the file in which the images are stored, respectively.
%% Cell type:markdown id: tags:
## Database to list of images
The database stores all positions, orientations, and images along the trajectory in a single file. On the one hand, this is convenient, because you always know at which position and in which orientation an image has been rendered. On the other hand, the images cannot easily be visualised.
To convert the database into an image sequence or list, you can run the following script in an ipython notebook (don't forget to change the variable ```database``` to the path of your own database).
%% Cell type:code id: tags:
``` python
# Load the necessary modules
from navipy.database import DataBaseLoad
from matplotlib.image import imsave
import numpy as np
import os
import matplotlib.pyplot as plt
# Load the database, and specify the
# output directory to save the list
# of images
import pkg_resources
# Use the database from the resources
# You can adapt this code, by changing database
# to your own database file
database = pkg_resources.resource_filename(
    'navipy',
    'resources/database.db')
database_dir, _ = os.path.splitext(database)
if not os.path.exists(database_dir):
    os.makedirs(database_dir)
database_template = os.path.join(database_dir, 'frame_{}.png')
mydb = DataBaseLoad(database)
for rowid in mydb.posorients.index:
    my_scene = mydb.scene(rowid=rowid)
    # Channels 0-2 are the image, channel 3 the distance
    to_plot_im = my_scene[:, :, :3, 0].astype(float)
    # Normalise the image to the range [0, 255]
    to_plot_im -= to_plot_im.min()
    to_plot_im /= to_plot_im.max()
    to_plot_im = to_plot_im * 255
    to_plot_im = to_plot_im.astype(np.uint8)
    to_plot_dist = my_scene[:, :, 3, 0]
    # The image is stored upside down, hence the flip
    imsave(database_template.format(rowid), to_plot_im[::-1, ...])
```
%% Cell type:markdown id: tags:
## Plotting an image from the database
%% Cell type:code id: tags:
``` python
%matplotlib inline
my_scene = mydb.scene(rowid=2)
f, axarr = plt.subplots(1, 2, figsize=(15, 4))
to_plot_im = my_scene[:, :, :3, 0].astype(float)
to_plot_im -= to_plot_im.min()
to_plot_im /= to_plot_im.max()
to_plot_im = to_plot_im * 255
to_plot_im = to_plot_im.astype(np.uint8)
to_plot_dist = my_scene[:, :, 3, 0]
# Image on the left, distance channel on the right
ax = axarr[0]
ax.imshow(to_plot_im)
ax.invert_yaxis()
ax = axarr[1]
ax.imshow(to_plot_dist)
ax.invert_yaxis()
f.show()
```
......
@@ -8,11 +8,11 @@ import inspect
 import pkg_resources
 import tempfile
 # Following need to be imported in blender as well
-from navipy.tools.trajectory import Trajectory
+from navipy.trajectories import Trajectory
 importwithinblender = [
-    'from navipy.tools.trajectory import Trajectory']
+    'from navipy.trajectories import Trajectory']
 def parser_blend_overlaytraj():
......
@@ -507,7 +507,7 @@ class Trajectory(pd.DataFrame):
                   colors=None, step_lollipop=1,
                   offset_lollipop=0, lollipop_marker='o',
                   linewidth=1, lollipop_tail_width=1,
-                  lollipop_head_size=1
+                  lollipop_head_size=1, stickdir='backward'
                   ):
         """ lollipops plot
@@ -534,6 +534,7 @@ class Trajectory(pd.DataFrame):
         :param linewidth:
         :param lollipop_tail_width:
         :param lollipop_head_size:
+        :param stickdir: The direction of the stick of the animal (backward or forward)
         """
         direction = self.facing_direction()
         if colors is None:
@@ -549,7 +550,10 @@ class Trajectory(pd.DataFrame):
             index=pd.MultiIndex.from_product(
                 [[0],
                  ['x', 'y', 'z']]))
-        tailmarker.loc[0, 'x'] = 1
+        if stickdir == 'forward':
+            tailmarker.loc[0, 'x'] = -1
+        else:
+            tailmarker.loc[0, 'x'] = 1
         tail = self.world2body(tailmarker)
         tail = tail.loc[:, 0]
         # Plot the agent trajectory
......