Commit ea2bba87 authored by Olivier Bertrand

Update doc
tutorials/index
references/index
tests/index

Indices and tables
==================
sensors
processing
comparing
moving
database
maths
arenatools
Moving
======

Agents
~~~~~~

.. automodule:: navipy.moving.test_agent
    :members:

Maths
~~~~~

.. automodule:: navipy.moving.test_maths
    :members:
How to generate a database using blender
----------------------------------------

.. literalinclude:: examples/blenddemo_beesampling.py
    :lines: 7, 8

First we configure the rendering module:

.. literalinclude:: examples/blenddemo_beesampling.py
    :lines: 11, 12
With the toolbox at our disposal, we just need to configure the
BeeSampling to render images on a regular 3D grid.

.. literalinclude:: examples/blenddemo_beesampling.py
    :lines: 13, 16-23
If we want to use the distance to objects, we need to tell the
BeeSampling what the maximum distance to objects in the environment is.
Otherwise the distance can extend to infinity, and since the images are
compressed in the database, all distances to objects will be equal to
zero:

.. literalinclude:: examples/blenddemo_beesampling.py
    :lines: 27-28
Finally we can generate the database.

.. literalinclude:: examples/blenddemo_beesampling.py
    :dedent: 4
    :lines: 31-32
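For orientation, here is a minimal, hypothetical sketch of how such a script could be organised. The module path and the names ``BeeSampling``, ``create_sampling_grid``, ``world_dim`` and ``render`` are assumptions here; the downloadable source below is the authoritative version, and the script is meant to be run inside Blender.

.. code-block:: python

    # Hypothetical sketch; names are assumptions, see the downloadable
    # source below for the authoritative version.
    import numpy as np
    from navipy.sensors.bee_sampling import BeeSampling  # assumed module path

    bee_samp = BeeSampling()
    # positions of the regular 3D grid on which scenes are rendered
    x = np.linspace(-5, 5, 11)
    y = np.linspace(-5, 5, 11)
    z = np.array([3.0])
    alpha_1 = np.array([0.0])  # yaw
    alpha_2 = np.array([0.0])  # pitch
    alpha_3 = np.array([0.0])  # roll
    bee_samp.create_sampling_grid(x, y, z, alpha1=alpha_1,
                                  alpha2=alpha_2, alpha3=alpha_3)
    # bound the distance channel so that compression does not flatten it
    bee_samp.world_dim = 50.0
    # render every grid node into the database
    bee_samp.render('database.db')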
(:download:`Source code <examples/blenddemo_beesampling.py>`)
(:download:`Blender world <../../../navipy/resources/forest_world.blend>`)
%% Cell type:markdown id: tags:
# Using the Optic Flow function

This tutorial explains how to use the optic flow function of the navipy toolbox.
The function computes the optic flow for a single position with an already rendered scene.
Therefore, if the optic flow of a whole trajectory is wanted, this function has to be called in a loop for each step.
To compute the optic flow we need:

* the viewing direction of each pixel (in radians)
* the already rendered scene
* the velocity of the bee, including the acceleration

The signature of the optic_flow function is the following:
%% Cell type:code id: tags:
``` python
optic_flow(scene, viewing_directions, velocity, distance_channel=3)
```
%% Cell type:markdown id: tags:
* scene: the rendered scene (ibpc)
* viewing_directions: viewing direction of each pixel
  (azimuth, elevation) in radians
* velocity: pandas Series
  (x, y, z, alpha, beta, gamma, dx, dy, dz, dalpha, dbeta, dgamma)
* distance_channel: index of the distance channel; possible choices: 0, 1, 2, 3; default: 3
%% Cell type:markdown id: tags:
The scene must be of shape h x w x 4 x 1, where h is the height of the image and w its width. The third dimension holds the different channels R, G, B and D, where R is the red, G the green, B the blue and D the distance channel. Here is an example of how to initialize a random scene of dimension 180 x 360.
For an example of how to get a rendered scene, please refer to the other tutorials.
%% Cell type:code id: tags:
``` python
import numpy as np
scene = np.random.random((180, 360, 4, 1))
```
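%% Cell type:markdown id: tags:
To make the channel layout explicit, the colour image and the distance channel can be pulled out of such a scene with plain NumPy slicing (nothing navipy-specific, just an illustration of the h x w x 4 x 1 shape):
%% Cell type:code id: tags:
``` python
image = scene[:, :, :3, 0]    # R, G, B channels, shape (180, 360, 3)
distance = scene[:, :, 3, 0]  # distance channel, shape (180, 360)
print(image.shape, distance.shape)
```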
%% Cell type:markdown id: tags:
To set up the viewing directions, sampled in steps of one degree, the following can be done:
%% Cell type:code id: tags:
``` python
# sample elevation and azimuth in steps of one degree
elevation = np.arange(-np.pi/2, np.pi/2, 2*(np.pi/360))
azimuth = np.arange(0, 2*np.pi, 2*(np.pi/360))
viewing_directions = np.zeros((180, 2))
viewing_directions[:, 0] = elevation
viewing_directions[:, 1] = azimuth[0:180]
```
%% Cell type:markdown id: tags:
The velocity contains the velocity at the current position as well as the acceleration (the difference in velocity with respect to the previous position). The velocity must be a pandas Series with a multiindex, where the first level of the angle entries names the rotation convention to be used.
The conventions that can be used are:

    'sxyz', 'sxyx', 'sxzy', 'sxzx', 'syzx', 'syzy',
    'syxz', 'syxy', 'szxy', 'szxz', 'szyx', 'szyz',
    'rzyx', 'rxyx', 'ryzx', 'rxzx', 'rxzy', 'ryzy',
    'rzxy', 'ryxy', 'ryxz', 'rzxz', 'rxyz', 'rzyz'

Here is an example of how to initialize it. In this example the bee is rotating around the x axis (yaw) and stays constant otherwise.
%% Cell type:code id: tags:
``` python
import pandas as pd
# only the yaw changes, i.e. the bee turns about its yaw axis
positions = np.array([[-20, -20, 2.6, 1.57079633, 0, 0],
                      [-20, -20, 2.6, 1.90079633, 0, 0]])
x = positions[0, 0]
y = positions[0, 1]
z = positions[0, 2]
yaw = positions[0, 3]
pitch = positions[0, 4]
roll = positions[0, 5]
dx = positions[1, 0]-positions[0, 0]
dy = positions[1, 1]-positions[0, 1]
dz = positions[1, 2]-positions[0, 2]
dyaw = positions[1, 3]-positions[0, 3]
dpitch = positions[1, 4]-positions[0, 4]
droll = positions[1, 5]-positions[0, 5]
tuples = [('location', 'x'), ('location', 'y'),
          ('location', 'z'), ('location', 'dx'),
          ('location', 'dy'), ('location', 'dz'),
          ('rxyz', 'alpha_0'), ('rxyz', 'alpha_1'),
          ('rxyz', 'alpha_2'), ('rxyz', 'dalpha_0'),
          ('rxyz', 'dalpha_1'), ('rxyz', 'dalpha_2')]
index = pd.MultiIndex.from_tuples(tuples,
                                  names=['position', 'orientation'])
velocity = pd.Series(index=index)
velocity['location']['x'] = x
velocity['location']['y'] = y
velocity['location']['z'] = z
velocity['rxyz']['alpha_0'] = yaw
velocity['rxyz']['alpha_1'] = pitch
velocity['rxyz']['alpha_2'] = roll
velocity['location']['dx'] = dx
velocity['location']['dy'] = dy
velocity['location']['dz'] = dz
velocity['rxyz']['dalpha_0'] = dyaw
velocity['rxyz']['dalpha_1'] = dpitch
velocity['rxyz']['dalpha_2'] = droll
```
%% Cell type:markdown id: tags:
The optic flow function can then be called as follows:
%% Cell type:code id: tags:
``` python
from navipy.processing import mcode
rof, hof, vof = mcode.optic_flow(scene, viewing_directions,
                                 velocity)
```
%% Cell type:markdown id: tags:
Here is an example that loads a scene from the database and calculates the horizontal, vertical and rotational optic flow.
%% Cell type:code id: tags:
``` python
# Load the necessary modules
from navipy.processing import mcode
from navipy.database import DataBaseLoad
from matplotlib.image import imsave
import numpy as np
import os
import matplotlib.pyplot as plt
# Load the database of pre-rendered scenes
import pkg_resources
%matplotlib inline
database = pkg_resources.resource_filename(
    'navipy',
    'resources/database.db')
mydb = DataBaseLoad(database)
my_scene = mydb.scene(rowid=2)
rof, hof, vof = mcode.optic_flow(my_scene, viewing_directions,
                                 velocity)
f, axarr = plt.subplots(2, 2, figsize=(15, 4))
plt.subplot(221)
# convert the RGB channels to a displayable 8-bit image
to_plot_im = my_scene[:, :, :3, 0].astype(float)
to_plot_im -= to_plot_im.min()
to_plot_im /= to_plot_im.max()
to_plot_im = to_plot_im * 255
to_plot_im = to_plot_im.astype(np.uint8)
to_plot_dist = my_scene[:, :, 3, 0]  # distance channel (not plotted here)
plt.imshow(to_plot_im)
plt.subplot(222)
plt.imshow(hof, cmap='gray')
plt.subplot(223)
plt.imshow(vof, cmap='gray')
plt.subplot(224)
plt.imshow(rof, cmap='gray')
```
%% Output
<matplotlib.image.AxesImage at 0x7fa2a919fe48>
%% Cell type:markdown id: tags:
# Example with a trajectory
%% Cell type:code id: tags:
``` python
from scipy.io import loadmat
import numpy as np
import os
from navipy.trajectories import Trajectory
import pkg_resources
# Use the trajectory file from the resources
# You can adapt this code, by changing trajfile
# with your own trajectory file
trajfile = pkg_resources.resource_filename(
    'navipy',
    'resources/sample_experiment/Lobecke_JEB_2018/Y101_OBFlight_0001.mat')
csvtrajfile, _ = os.path.splitext(trajfile)
csvtrajfile = csvtrajfile+'.csv'
mymat = loadmat(trajfile)
# Matlab files are loaded into a dictionary;
# we need to identify under which key the trajectory has been saved
print(mymat.keys())
key = 'trajectory'
# Arrays are nested in tuples. We need to access the level
# of the array itself.
mymat = mymat[key][0][0][0]
# The array is a numpy array and therefore has a .shape attribute
# to display the array size:
print(mymat.shape)
# In this example the array has 7661 rows and 4 columns.
# The columns are: x, y, z, yaw; pitch and roll are assumed constant (here zero).
# We can therefore initialise a trajectory with a rotation convention,
# often yaw-pitch-roll (i.e. 'rzyx'), and with as many indices as there are
# sampling points in the trajectory.
rotconvention = 'rzyx'
indeces = np.arange(0, mymat.shape[0])
mytraj = Trajectory(rotconv=rotconvention, indeces=indeces)
# We now can assign the values
mytraj.x = mymat[:,0]
mytraj.y = mymat[:,1]
mytraj.z = mymat[:,2]
mytraj.alpha_0 = mymat[:,3]
mytraj.alpha_1 = 0
mytraj.alpha_2 = 0
# We can then save mytraj as csv file for example
mytraj.to_csv(csvtrajfile)
mytraj.head()
```
%% Output
dict_keys(['__globals__', '__version__', 'trajectory', 'cylinders', 'nests', '__header__'])
(7661, 4)
conv rzyx
location rzyx
x y z alpha_0 alpha_1 alpha_2
0 3.056519 -214.990482 9.330593 2.79751 0 0
1 4.611665 -215.020314 8.424138 2.80863 0 0
2 4.556650 -214.593236 9.185016 2.81407 0 0
3 4.643091 -213.829769 10.542035 2.82704 0 0
4 4.647302 -214.431592 7.461187 2.82896 0 0
%% Cell type:markdown id: tags:
Then we need the velocity to calculate the optic flow:
%% Cell type:code id: tags:
``` python
vel = mytraj.velocity()
vel.head()
```
%% Output
dx dy dz dalpha_0 dalpha_1 dalpha_2
0 NaN NaN NaN NaN NaN NaN
1 1.555146 -0.029831 -0.906455 0.01112 0.0 0.0
2 -0.055015 0.427078 0.760878 0.00544 0.0 0.0
3 0.086441 0.763467 1.357019 0.01297 0.0 0.0
4 0.004211 -0.601823 -3.080847 0.00192 0.0 0.0
%% Cell type:markdown id: tags:
Iterate through the trajectory to get the optic flow for each scene:
%% Cell type:code id: tags:
``` python
tuples = [('location', 'x'), ('location', 'y'),
          ('location', 'z'), ('location', 'dx'),
          ('location', 'dy'), ('location', 'dz'),
          ('rxyz', 'alpha_0'), ('rxyz', 'alpha_1'),
          ('rxyz', 'alpha_2'), ('rxyz', 'dalpha_0'),
          ('rxyz', 'dalpha_1'), ('rxyz', 'dalpha_2')]
index = pd.MultiIndex.from_tuples(tuples,
                                  names=['position', 'orientation'])
for i in range(1, 10):
    velocity = pd.Series(index=index)
    velocity['location']['x'] = mytraj.x[i]
    velocity['location']['y'] = mytraj.y[i]
    velocity['location']['z'] = mytraj.z[i]
    velocity['rxyz']['alpha_0'] = mytraj.alpha_0[i]
    velocity['rxyz']['alpha_1'] = mytraj.alpha_1[i]
    velocity['rxyz']['alpha_2'] = mytraj.alpha_2[i]
    velocity['location']['dx'] = vel.dx[i]
    velocity['location']['dy'] = vel.dy[i]
    velocity['location']['dz'] = vel.dz[i]
    velocity['rxyz']['dalpha_0'] = vel.dalpha_0[i]
    velocity['rxyz']['dalpha_1'] = vel.dalpha_1[i]
    velocity['rxyz']['dalpha_2'] = vel.dalpha_2[i]
    my_scene = mydb.scene(rowid=i)
    rof, hof, vof = mcode.optic_flow(my_scene, viewing_directions,
                                     velocity)
```
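%% Cell type:markdown id: tags:
Note that the loop above overwrites `rof`, `hof` and `vof` at every step. If the flow along the whole trajectory is needed afterwards, the per-step results can simply be collected in lists. This is a small optional extension of the loop, not part of the original example; it builds the same velocity values directly into the Series constructor:
%% Cell type:code id: tags:
``` python
rof_all, hof_all, vof_all = [], [], []
for i in range(1, 10):
    # values in the same order as the MultiIndex tuples defined above
    values = [mytraj.x[i], mytraj.y[i], mytraj.z[i],
              vel.dx[i], vel.dy[i], vel.dz[i],
              mytraj.alpha_0[i], mytraj.alpha_1[i], mytraj.alpha_2[i],
              vel.dalpha_0[i], vel.dalpha_1[i], vel.dalpha_2[i]]
    velocity = pd.Series(values, index=index)
    my_scene = mydb.scene(rowid=i)
    rof, hof, vof = mcode.optic_flow(my_scene, viewing_directions,
                                     velocity)
    rof_all.append(rof)
    hof_all.append(hof)
    vof_all.append(vof)
```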
Render a single image
---------------------

.. literalinclude:: examples/blenddemo_cyberbee.py
    :lines: 5

With the toolbox at our disposal, we just need to configure the
Cyberbee to render images at desired positions.

.. literalinclude:: examples/blenddemo_cyberbee.py
    :lines: 8-13

To render a scene at a given position, we assign a position and
orientation to the Cyberbee and then ask it for the scene at that point.

.. literalinclude:: examples/blenddemo_cyberbee.py
    :lines: 16-24
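As a rough orientation, a hypothetical sketch of these steps is given below. The renderer's module path, its attributes and the exact layout of the position/orientation Series are assumptions here; the downloadable source below is the authoritative version, and the script must run inside Blender.

.. code-block:: python

    # Hypothetical sketch; names are assumptions, see the downloadable
    # source below for the authoritative version.
    import pandas as pd
    from navipy.sensors.cyber_bee import Cyberbee  # assumed module path

    cyberbee = Cyberbee()
    cyberbee.cycle_samples = 50  # rendering quality/speed trade-off (assumed attribute)

    # position (x, y, z) and orientation (Euler angles, 'rxyz' convention)
    index = pd.MultiIndex.from_tuples(
        [('location', 'x'), ('location', 'y'), ('location', 'z'),
         ('rxyz', 'alpha_0'), ('rxyz', 'alpha_1'), ('rxyz', 'alpha_2')])
    posorient = pd.Series([0.0, 0.0, 3.0, 0.0, 0.0, 0.0], index=index)

    # render the four-channel scene (R, G, B, distance) at that pose
    scene = cyberbee.scene(posorient)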
(:download:`Source code <examples/blenddemo_cyberbee.py>`)
(:download:`Blender world <../../../navipy/resources/forest_world.blend>`)
Create a movie
--------------