Commit c72ee648 authored by Olivier Bertrand

Merge branch 'develop' into Luise

parents 9d3eaba1 9621a84c
Showing with 24206 additions and 7934 deletions
# .coveragerc to control coverage.py
[run]
omit =
    # Omit all test_ functions
    **/test_*
    # Omit blender functions / can be only tested with blender files
    **/blend*
source = navipy
[html]
directory = coverage_html_report
# This file is a template, and might need editing before it works on your project.
image: python:latest

before_script:
  - python3 -V  # Print out python version for debugging

stages:
  - build
  - test
  - deploy
  - coverage
  - doc_build

navipy_install:
  stage: build
  before_script:
    - python3 -V  # Print out python version for debugging
  script:
    - python3 setup.py bdist_wheel
    - pip3 install virtualenv
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3 install .
    - python navipy/scripts/config_matplotlib_server.py  # because no interactive backend
  artifacts:
    paths:
      - venv
    expire_in: 2 hours

flake8:
  stage: test
  dependencies:
    - navipy_install
  script:
    - pip3 install --user tox flake8
    - source $CI_PROJECT_DIR/venv/bin/activate
    - python3 -m flake8 navipy/*.py

navipy_unittest:
  stage: test
  dependencies:
    - navipy_install
  script:
    - pip3 install --user networkx
    - source $CI_PROJECT_DIR/venv/bin/activate
    - python3 -m coverage run -m unittest discover navipy
  artifacts:
    paths:
      - .coverage
    expire_in: 1 hour

coverage:
  stage: coverage
  dependencies:
    - navipy_install
    - navipy_unittest
  script:
    - source $CI_PROJECT_DIR/venv/bin/activate
    - python3 -m coverage html
  artifacts:
    paths:
      - coverage_html_report
    expire_in: 1 day

doc_build:
  stage: doc_build
  dependencies:
    - navipy_install
    - coverage
  script:
    - pip3 install --user sphinx sphinx-rtd-theme
    - source $CI_PROJECT_DIR/venv/bin/activate
    - cd doc ; make html
    - mv build/html/ ../public/
  artifacts:
    paths:
      - public
    expire_in: 1 week
  only:
    - master
    - doc/build
......@@ -11,17 +11,67 @@ The navigation toolbox aims to bring in an intuitive python toolbox different me
- Avoid re-rendering by using grid constrained motion.
# How to install the navigation toolbox
The rendering from the insect point of view is done with the Blender rendering engine. Thus, you will first need to install Blender
https://www.blender.org/
## Windows
We recommend using Anaconda (https://www.anaconda.com/) and creating a virtual environment within it before installing the toolbox.
Start the Anaconda Prompt, and then enter
```
conda update conda
```
Update any packages if necessary by typing y to proceed.
Then, create a virtual environment for your project
```
conda create -n yourenvname python=x.x anaconda
```
here `yourenvname` is the name of your project (without special characters or spaces), and `python=x.x` is the version of python to use, for example `python=3.5.3` (see the table below to install a version matching with Blender).
You can now activate your environment:
```
activate yourenvname
```
and install navipy by going to the root folder of the toolbox (use cd to change directory and dir to list the content of a directory), and typing
```
python setup.py install
```
You can now use the navigation toolbox.
## Linux (Ubuntu)
From the terminal
```
pip install update
```
Update any packages if necessary by typing y to proceed.
Then, create a virtual environment for your project
```
mkvirtualenv yourenvname
```
here `yourenvname` is the name of your project (without special characters or spaces).
You can now activate your environment:
```
workon yourenvname
```
and install navipy by going to the root folder of the toolbox (use cd to change directory and ls to list the content of a directory), and typing
```
python setup.py install
```
## Blender-python version
| Blender version | Python version |
| --------------- | -------------- |
| 2.79b           | 3.5.3          |
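To verify the installation, a minimal check (assuming the steps above succeeded) is to import the package from a python session:
```
import navipy
print(navipy.__file__)  # prints where the toolbox was installed
```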
# Code of conduct
In the interest of fostering an open and welcoming environment, we as users and developers pledge to making participation with this project a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
# Todo
1. solve problems with CI/CD pipelines for continuous integration
2. add tutorials
3. comment test functions -> add them to the doc
......@@ -525,7 +525,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
"version": "3.6.3"
}
},
"nbformat": 4,
......
"""
Function for camera calibrations
"""
import pandas as pd
import numpy as np
import cv2
import os
from navipy.io import ivfile
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # noqa F401
def concatenate_manhattan_2d3d(manhattan_3d, manhattan_2d):
""" Join 3D and 2D points for pose estimation """
if not isinstance(manhattan_2d, pd.DataFrame):
raise TypeError('manhattan_2d should be a DataFrame')
if not isinstance(manhattan_3d, pd.DataFrame):
raise TypeError('manhattan_3d should be a DataFrame')
d = {'three_d': manhattan_3d, 'two_d': manhattan_2d}
manhattan_3d2d = pd.concat(d.values(), axis=1, keys=d.keys())
manhattan_3d2d = manhattan_3d2d.astype(float)
return manhattan_3d2d
def objects_points_from_manhattan(manhattan_3d2d):
"""return objects points from a join manhatta
.. seealso: concatenate_manhattan_2d3d
"""
if not isinstance(manhattan_3d2d, pd.DataFrame):
raise TypeError('manhattan_3d2d should be a DataFrame')
return manhattan_3d2d.dropna().loc[:, 'three_d']\
.loc[:, ['x', 'y', 'z']].values
def image_points_from_manhattan(manhattan_3d2d):
"""return image points from a joined manhattan
.. seealso:: concatenate_manhattan_2d3d
"""
if not isinstance(manhattan_3d2d, pd.DataFrame):
raise TypeError('manhattan_3d2d should be a DataFrame')
return manhattan_3d2d.dropna().loc[:, 'two_d'].loc[:, ['x', 'y']].values
def compose_from_vectors(rvec, tvec):
"""return camera pose from rotation and translation vectors
"""
r = cv2.Rodrigues(rvec)[0]
camera_pose = np.hstack((r, tvec))
camera_pose = np.vstack((camera_pose, [0, 0, 0, 1]))
return camera_pose
def camera_pose(manhattan_2d, manhattan_3d, camera_intrinsics):
"""solvePnP with 3D-2D points correspondance.
The camera intrinsics have to be known appriori
"""
camera_matrix = camera_intrinsics['intrinsic_matrix']
distcoeffs = camera_intrinsics['distortion']
manhattan_2d3d = concatenate_manhattan_2d3d(manhattan_3d, manhattan_2d)
objects_points = objects_points_from_manhattan(manhattan_2d3d)
image_points = image_points_from_manhattan(manhattan_2d3d)
retval, rvec, tvec = cv2.solvePnP(
objects_points, image_points, camera_matrix, distcoeffs)
return compose_from_vectors(rvec, tvec)
def calibrates_extrinsic(filenames,
manhattan_3d,
cameras_intrinsics,
corner_th=-1,
manhattan_3d_key='manhattan'):
"""Calibrate from files.
:param filenames: a list of ivfile name with manhattan markers
:param manhattan_3d: a pandas dataframe with the x,y,z \
coordinates of each point of the manhattan
:param cameras_intrinsics: a list of camera intrinsics
:param corner_th: a threshold for points to be ignored on the top\
right image corner (useful when certain points are not visible)
:param manhattan_3d_key: the key to load the manhattan_3d \
(default: manhattan)
"""
cameras_extrinsics = dict()
for cam_i, cfile in filenames.items():
_, ext = os.path.splitext(cfile)
if ext == '.tra':
manhattan_2d = ivfile.manhattan(cfile, corner_th)
elif ext == '.csv':
manhattan_2d = pd.read_csv(cfile,
names=['x', 'y'],
header=0)
idx = (manhattan_2d.x < corner_th) & (manhattan_2d.y < corner_th)
manhattan_2d.loc[idx, 'x'] = np.nan
manhattan_2d.loc[idx, 'y'] = np.nan
else:
msg = 'FileType not understood (only .tra, .csv are supported)'
raise ValueError(msg)
cameras_extrinsics[cam_i] = dict()
cameras_extrinsics[cam_i]['pose'] = camera_pose(
manhattan_2d, manhattan_3d, cameras_intrinsics[cam_i])
return cameras_extrinsics
def plot_manhattan(manhattan_3d, figsize=(15, 15), unit=None):
"""Plot a manhattan dataframe
:param manhattan_3d: x,y,z coordinates of each tower
:param figsize: figure size
:param unit: coordinate unit used for the axis labels (defaults to \
manhattan_3d.unit when available)
:returns: figure and axis handles
"""
if unit is None:
try:
    unit = manhattan_3d.unit
except AttributeError:
    pass
if unit is None:
unit = 'no-unit'
fig = plt.figure(figsize=figsize)
ax0 = fig.add_subplot(221)
ax1 = fig.add_subplot(223)
ax3 = fig.add_subplot(224)
for ax, col_i, col_j in zip([ax1, ax0, ax3],
['x', 'x', 'z'],
['y', 'z', 'y']):
ax.plot(manhattan_3d.loc[:, col_i], manhattan_3d.loc[:, col_j], 'ko')
ax.set_xlabel('{} [{}]'.format(col_i, unit))
ax.set_ylabel('{} [{}]'.format(col_j, unit))
ax.axis('equal')
# Annotate
for row_i, row_val in manhattan_3d.iterrows():
shift = [10, 10]
xy = row_val.loc[['x', 'y']]
xytext = xy - 2 * ((xy > 0) - 0.5) * shift
ax1.annotate('{}'.format(row_i), xy, xytext)
ax4 = fig.add_subplot(222, projection='3d')
for i, row in manhattan_3d.iterrows():
ax4.plot([row.x, row.x],
[row.y, row.y],
[0, row.z], 'k-.')
ax4.set_xlabel('{} [{}]'.format('x', unit))
ax4.set_ylabel('{} [{}]'.format('y', unit))
ax4.set_zlabel('{} [{}]'.format('z', unit))
return fig, [ax0, ax1, ax3, ax4]
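As a usage illustration of the calibration helpers above, here is a hedged sketch; the manhattan coordinates and the camera intrinsics below are made-up placeholders, not a real calibration:
```
# Hedged example for camera_pose(); all numbers are placeholders.
import numpy as np
import pandas as pd

manhattan_3d = pd.DataFrame({'x': [0., 100., 0., 100., 50., 150.],
                             'y': [0., 0., 100., 100., 50., 150.],
                             'z': [50., 60., 55., 65., 70., 40.]})
manhattan_2d = pd.DataFrame({'x': [512., 700., 510., 705., 600., 800.],
                             'y': [400., 410., 600., 615., 500., 520.]})
camera_intrinsics = {'intrinsic_matrix': np.array([[1385., 0., 1014.],
                                                   [0., 1386., 970.],
                                                   [0., 0., 1.]]),
                     'distortion': np.zeros(5)}
pose = camera_pose(manhattan_2d, manhattan_3d, camera_intrinsics)
print(pose)  # 4x4 homogeneous pose of the camera
```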
import pandas as pd
import numpy as np
def dlt_reconstruct(coeff, campts, z=None):
"""
This function reconstructs the 3D position of a coordinate based on a set
of DLT coefficients and [u,v] pixel coordinates from 2 or more cameras
:param coeff: - 11 DLT coefficients for all n cameras, [11,n] array
:param campts: - [u,v] pixel coordinates from all n cameras over f frames
:param z: the z coordinate of all points for reconstruction \
from a single camera
:returns: xyz - the xyz location in each frame, an [f,3] array\
rmse - the root mean square error for each xyz point, an [f,1] array,\
units are [u,v] i.e. camera coordinates or pixels
"""
# number of cameras
ncams = campts.columns.levels[0].shape[0]
if (ncams == 1) and (z is None):
raise NameError('reconstruction from a single camera requires z')
# setup output variables
xyz = pd.DataFrame(index=campts.index, columns=['x', 'y', 'z'])
rmse = pd.Series(index=campts.index)
# process each frame
for ii, frame_i in enumerate(campts.index):
# get a list of cameras with non-NaN [u,v]
row = campts.loc[frame_i, :].unstack()
validcam = row.dropna(how='any')
cdx = validcam.index
if validcam.shape[0] >= 2:
# Two or more cameras
u = campts.loc[frame_i, cdx].u
v = campts.loc[frame_i, cdx].v
# initialize least-square solution matrices
m1 = np.zeros((cdx.shape[0]*2, 3))
m2 = np.zeros((cdx.shape[0]*2, 1))
m1[0: cdx.size*2: 2, 0] = u*coeff.loc[8, cdx]-coeff.loc[0, cdx]
m1[0: cdx.size*2: 2, 1] = u*coeff.loc[9, cdx]-coeff.loc[1, cdx]
m1[0: cdx.size*2: 2, 2] = u*coeff.loc[10, cdx]-coeff.loc[2, cdx]
m1[1: cdx.size*2: 2, 0] = v*coeff.loc[8, cdx]-coeff.loc[4, cdx]
m1[1: cdx.size*2: 2, 1] = v*coeff.loc[9, cdx]-coeff.loc[5, cdx]
m1[1: cdx.size*2: 2, 2] = v*coeff.loc[10, cdx]-coeff.loc[6, cdx]
m2[0: cdx.size*2: 2, 0] = coeff.loc[3, cdx]-u
m2[1: cdx.size*2: 2, 0] = coeff.loc[7, cdx]-v
# get the least squares solution to the reconstruction
xyz.loc[frame_i, ['x', 'y', 'z']] = \
np.linalg.lstsq(m1, m2, rcond=None)[0]
# compute ideal [u,v] for each camera
uv = m1.dot(xyz.loc[frame_i, ['x', 'y', 'z']].transpose())
# compute the number of degrees of freedom in the reconstruction
dof = m2.size-3
# estimate the root mean square reconstruction error
rmse.loc[frame_i] = (np.sum((m2-uv) ** 2)/dof) ** 0.5
elif (validcam.shape[0] == 1) and (z is not None):
# http://www.kwon3d.com/theory/dlt/dlt.html
# equation 19 with z = constant
# the term with z can be moved to the right side
# then equation 21 can be solved as follows:
u = campts.loc[frame_i, cdx].unstack().u
v = campts.loc[frame_i, cdx].unstack().v
# initialize least-square solution matrices
m1 = np.zeros((cdx.shape[0]*2, 2))
m2 = np.zeros((cdx.shape[0]*2, 1))
m1[0: cdx.size*2: 2, 0] = u*coeff.loc[8, cdx]-coeff.loc[0, cdx]
m1[0: cdx.size*2: 2, 1] = u*coeff.loc[9, cdx]-coeff.loc[1, cdx]
m1[1: cdx.size*2: 2, 0] = v*coeff.loc[8, cdx]-coeff.loc[4, cdx]
m1[1: cdx.size*2: 2, 1] = v*coeff.loc[9, cdx]-coeff.loc[5, cdx]
m2[0: cdx.size*2: 2, 0] = coeff.loc[3, cdx]-u
m2[1: cdx.size*2: 2, 0] = coeff.loc[7, cdx]-v
m2[0: cdx.size*2: 2, 0] -= \
(u*coeff.loc[10, cdx] - coeff.loc[2, cdx])*z.loc[frame_i]
m2[1: cdx.size*2: 2, 0] -= \
(v*coeff.loc[10, cdx] - coeff.loc[6, cdx])*z.loc[frame_i]
# get the least squares solution to the reconstruction
xyz.loc[frame_i, ['x', 'y']] = \
np.squeeze(np.linalg.lstsq(m1, m2, rcond=None)[0])
xyz.loc[frame_i, 'z'] = z.loc[frame_i]
# compute ideal [u,v] for each camera
uv = m1.dot(xyz.loc[frame_i, ['x', 'y']].transpose())
# compute the number of degrees of freedom in the reconstruction
dof = m2.size-3
# estimate the root mean square reconstruction error
rmse.loc[frame_i] = (np.sum((m2-uv) ** 2)/dof) ** 0.5
return xyz, rmse
def dlt_inverse(coeff, frames):
"""
This function reconstructs the pixel coordinates of a 3D coordinate as
seen by the camera specified by DLT coefficients c
:param coeff: - 11 DLT coefficients for the camera, [11,1] array
:param frames: - [x,y,z] coordinates over f frames,[f,3] array
:returns: uv - pixel coordinates in each frame, [f,2] array
"""
# write the matrix solution out longhand for vector operation over
# all points at once
uv = np.zeros((frames.shape[0], 2))
frames = frames.loc[:, ['x', 'y', 'z']].values
normalisation = frames[:, 0]*coeff[8] + \
frames[:, 1]*coeff[9]+frames[:, 2]*coeff[10] + 1
uv[:, 0] = frames[:, 0]*coeff[0]+frames[:, 1] * \
coeff[1]+frames[:, 2]*coeff[2]+coeff[3]
uv[:, 1] = frames[:, 0]*coeff[4]+frames[:, 1] * \
coeff[5]+frames[:, 2]*coeff[6]+coeff[7]
uv[:, 0] /= normalisation
uv[:, 1] /= normalisation
return uv
def dlt_compute_coeffs(frames, campts):
"""
A basic implementation of 11 parameters DLT
:param frames: an array of x, y, z calibration point coordinates
:param campts: an array of u, v pixel coordinates from the camera
:returns: dlt coefficients and root mean square error
Notes: frames and campts must have the same number of rows and \
contain at least six rows. Also the frame points must not all lie \
within a single plane.
"""
# remove NaNs
valid_idx = frames.dropna(how='any').index
valid_idx = campts.loc[valid_idx, :].dropna(how='any').index
# valid df
vframes = frames.loc[valid_idx, :]
vcampts = campts.loc[valid_idx, :]
# rearrange the frame matrix to facilitate the linear least
# squares solution
matrix = np.zeros((vframes.shape[0]*2, 11)) # 11 for the dlt
for num_i, index_i in enumerate(vframes.index):
matrix[2*num_i, 0:3] = vframes.loc[index_i, ['x', 'y', 'z']]
matrix[2*num_i+1, 4:7] = vframes.loc[index_i, ['x', 'y', 'z']]
matrix[2*num_i, 3] = 1
matrix[2*num_i+1, 7] = 1
matrix[2*num_i, 8:11] = \
vframes.loc[index_i, ['x', 'y', 'z']]*(-vcampts.loc[index_i, 'u'])
matrix[2*num_i+1, 8:11] = \
vframes.loc[index_i, ['x', 'y', 'z']]*(-vcampts.loc[index_i, 'v'])
# rearrange the campts array for the linear solution, interleaving
# u and v to match the row order of the matrix above
vcampts_f = np.reshape(np.flipud(np.rot90(vcampts)), vcampts.size,
                       order='F')
# get the linear solution the 11 parameters
coeff = np.linalg.lstsq(matrix, vcampts_f, rcond=None)[0]
# compute the position of the frame in u,v coordinates given the linear
# solution from the previous line
matrix_uv = dlt_inverse(coeff, vframes)
# compute the rmse between the ideal frame u,v and the
# recorded frame u,v
rmse = np.sqrt(np.mean(np.sum((matrix_uv-vcampts)**2)))
return coeff, rmse
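To see how `dlt_inverse` and `dlt_compute_coeffs` fit together, here is a hedged round-trip on synthetic data; the reference coefficients are arbitrary placeholders (note that `dlt_reconstruct` additionally expects one column of coefficients per camera):
```
# Hedged round-trip: project synthetic points with made-up DLT
# coefficients, refit them, and check the reprojection error.
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
frames = pd.DataFrame(rng.rand(10, 3) * 100., columns=['x', 'y', 'z'])
true_coeff = np.array([1., .1, .05, 5., .1, 1., .05, 7.,
                       1e-4, 1e-4, 1e-4])  # arbitrary but valid
campts = pd.DataFrame(dlt_inverse(true_coeff, frames),
                      columns=['u', 'v'])
coeff, rmse = dlt_compute_coeffs(frames, campts)
print(rmse)  # should be close to zero on noise-free data
```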
"""
Function for triangulation of points projected on cameras
"""
import cv2
import numpy as np
from scipy.sparse.linalg import svds
from functools import partial
def emsvd(Y, k=None, tol=1E-3, maxiter=None):
"""
Approximate SVD on data with missing values via expectation-maximization
:param Y: (nobs, ndim) data matrix, missing values denoted by NaN/Inf
:param k: number of singular values/vectors to find (default: k=ndim)
:param tol: convergence tolerance on change in trace norm
:param maxiter: maximum number of EM steps to perform (default: no limit)
:returns: Y_hat, mu_hat, U, s, Vt
Y_hat: (nobs, ndim) reconstructed data matrix
mu_hat: (ndim,) estimated column means for reconstructed data
U, s, Vt: singular values and vectors (see np.linalg.svd and
scipy.sparse.linalg.svds for details)
Methods for large scale SVD with missing values
Miklós Kurucz, András A. Benczúr, Károly Csalogány, 2007
"""
if k is None:
svdmethod = partial(np.linalg.svd, full_matrices=False)
else:
svdmethod = partial(svds, k=k)
if maxiter is None:
maxiter = np.inf
# initialize the missing values to their respective column means
mu_hat = np.nanmean(Y, axis=0, keepdims=1)
valid = np.isfinite(Y)
Y_hat = np.where(valid, Y, mu_hat)
halt = False
ii = 1
v_prev = 0
while not halt:
# SVD on filled-in data
U, s, Vt = svdmethod(Y_hat - mu_hat)
# impute missing values
Y_hat[~valid] = (U.dot(np.diag(s)).dot(Vt) + mu_hat)[~valid]
# update bias parameter
mu_hat = Y_hat.mean(axis=0, keepdims=1)
# test convergence using relative change in trace norm
v = s.sum()
if ii >= maxiter or (v_prev > 0 and ((v - v_prev) / v_prev) < tol):
halt = True
ii += 1
v_prev = v
return Y_hat, mu_hat, U, s, Vt
def projects_points(pts_3d, cameras_calib):
ncameras = len(cameras_calib)
pts_cam = list()
for cam_i in range(ncameras):
cam_pose = cameras_calib[cam_i]['pose']
cam_mat = cameras_calib[cam_i]['intrinsic_matrix']
cam_dist = cameras_calib[cam_i]['distortion']
rvec, jacobian = cv2.Rodrigues(cam_pose[:3, :3])
tvec = cam_pose[:3, 3]
impoints, jacobian = cv2.projectPoints(
pts_3d, rvec, tvec, cam_mat, cam_dist)
pts_cam.append(np.squeeze(impoints))
pts_cam = np.array(pts_cam)
return pts_cam
def random_projects_points(npoints, edge_length, cameras_calib):
pts_3d = (np.random.rand(npoints, 3) - 0.5) * 2
pts_3d[:, 2] += 1
pts_3d[:, 2] *= edge_length
pts_3d[:, 1] *= edge_length
pts_3d[:, 0] *= edge_length
pts_cam = projects_points(pts_3d, cameras_calib)
return pts_cam, pts_3d
def undistord_ncam_points(cameras_calib, pts_cam):
ncameras = len(cameras_calib)
if np.shape(pts_cam)[0] != ncameras:
raise IndexError(
'Camera points should have the same length as cameras_calib')
if len(pts_cam.shape) == 2:
pts_cam = pts_cam[:, np.newaxis, :]
if len(pts_cam.shape) != 3:
raise IndexError('Camera points should be a 3-dimensional array')
for cam_i in range(ncameras):
dst = cv2.undistortPoints(pts_cam[cam_i, ...][np.newaxis, ...],
cameras_calib[cam_i]['intrinsic_matrix'],
cameras_calib[cam_i]['distortion'])
pts_cam[cam_i, ...] = np.squeeze(dst)
return pts_cam
def triangulate_pair(cameras_calib, pts_cam, cam_indeces):
"""
Triangulate points seen by two cameras
"""
cam_i = cam_indeces[0]
cam_j = cam_indeces[1]
point_4d_hom = cv2.triangulatePoints(cameras_calib[cam_i]['pose'][:3],
cameras_calib[cam_j]['pose'][:3],
pts_cam[cam_i][:, np.newaxis, :],
pts_cam[cam_j][:, np.newaxis, :])
point_4d = point_4d_hom / np.tile(point_4d_hom[-1, :], (4, 1))
return point_4d[:3, :].T
def triangulate_multiview_single_pts(cameras_calib, pts_cam):
"""
Methods for large scale SVD with missing values
Miklós Kurucz, András A. Benczúr, Károly Csalogány, 2007
"""
ncameras = len(cameras_calib)
if len(pts_cam) != ncameras:
raise IndexError(
'Camera points should have the same length as cameras_calib')
# Undistort the camera points via the camera model
pts_cam = np.squeeze(undistord_ncam_points(cameras_calib, pts_cam))
#
A = np.zeros((4, ncameras * 2))
for cam_i in range(ncameras):
pose = cameras_calib[cam_i]['pose'][:3]
idx = 2 * cam_i
A[:, idx: (idx + 2)] = \
pts_cam[cam_i, :][np.newaxis, :] * pose[2, :][:, np.newaxis] -\
pose[0:2].transpose()
_, _, _, _, values = emsvd(A.transpose())
X = values[-1, :]
X = X / X[-1]
point3d = X[:3]
return point3d
def triangulate_multiview(cameras_calib, pts_cam):
npoints = np.shape(pts_cam)[1]
point3d = np.zeros((npoints, 3))
for p_i in range(npoints):
point3d[p_i, :] = triangulate_multiview_single_pts(
cameras_calib, pts_cam[:, p_i, :])
return point3d
def triangulate_ncam_pairwise(cameras_calib, pts_cam):
ncameras = len(cameras_calib)
if pts_cam.shape[0] != ncameras:
msg = 'Camera points should have the same '
msg += 'length as cameras_calib {}!={}'.format(pts_cam.shape[0],
                                               ncameras)
raise IndexError(msg)
# Undistort the camera points via the camera model
pts_cam = undistord_ncam_points(cameras_calib, pts_cam)
# Init variables
n_points = pts_cam.shape[1]
max_comb = int(ncameras * (ncameras - 1) / 2)
point_3d = np.nan * np.zeros((max_comb, 3, n_points))
nvalid_comb = np.zeros((1, n_points))
comb_i = 0
# Reconstruct pairwise (every combination)
for cam_i in range(ncameras):
for cam_j in range(cam_i + 1, ncameras):
cpoint_3d = triangulate_pair(
cameras_calib, pts_cam, [cam_i, cam_j])
cvalid_id = np.any(np.isnan(cpoint_3d) != 1, axis=1)
nvalid_comb[:, cvalid_id] += 1
point_3d[comb_i, :, cvalid_id] = cpoint_3d[cvalid_id, :]
comb_i += 1
return ((np.nansum(point_3d, axis=0) / np.tile(nvalid_comb, (3, 1))).T,
point_3d, nvalid_comb)
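A hedged end-to-end sketch of the triangulation helpers above, using a made-up two-camera rig; the poses and intrinsics are illustrative placeholders, not a calibrated setup:
```
# Hypothetical rig: two cameras shifted along x, no distortion.
import numpy as np

cameras_calib = dict()
for cam_i, tx in enumerate([-100., 100.]):
    pose = np.eye(4)
    pose[0, 3] = tx  # shift each camera along x
    cameras_calib[cam_i] = {
        'pose': pose,
        'intrinsic_matrix': np.array([[1000., 0., 512.],
                                      [0., 1000., 512.],
                                      [0., 0., 1.]]),
        'distortion': np.zeros(5)}
# Project random 3D points on both cameras, then triangulate back
pts_cam, pts_3d = random_projects_points(5, 500., cameras_calib)
recon, _, nvalid = triangulate_ncam_pairwise(cameras_calib, pts_cam)
print(recon.shape, pts_3d.shape)  # both (5, 3); recon approximates pts_3d
```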
......@@ -87,7 +87,8 @@ the current and memorised place code.
# (M)
ridf = np.zeros((current.shape[1], current.shape[2]))
for azimuth_i in range(0, current.shape[1]):
rot_im = np.roll(current, azimuth_i, axis=1)
# Perform a counterclockwise rotation
rot_im = np.roll(current, -azimuth_i, axis=1)
ridf[azimuth_i, :] = np.squeeze(imagediff(rot_im, memory)) # rot_im
return ridf
......
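The sign matters because `np.roll` with a positive shift moves columns toward higher indices, while a negative shift moves them the other way, i.e. the opposite image rotation. A quick numpy check:
```
import numpy as np

row = np.arange(4)[np.newaxis, :]  # [[0, 1, 2, 3]]
print(np.roll(row, 1, axis=1))     # [[3, 0, 1, 2]] shift toward higher indices
print(np.roll(row, -1, axis=1))    # [[1, 2, 3, 0]] shift toward lower indices
```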
......@@ -34,6 +34,23 @@ def convert_array(text):
return np.load(out)
def database2memory(source):
""" Load a database in the RAM of the machine
It can be useful to speed up certain calculation
However quite some memory may be used.
:param source: path to the databse to load into memory
:returns: a database
"""
db = DataBaseLoad(source)
dbmem = DataBaseSave(":memory:")
for row_id, posorient in db.posorients.iterrows():
scene = db.scene(posorient)
dbmem.write_image(posorient, np.squeeze(scene[..., 0]))
return dbmem
# Converts np.array to TEXT when inserting
sqlite3.register_adapter(np.ndarray, adapt_array)
# Converts TEXT to np.array when selecting
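A hedged usage sketch of `database2memory`; the database path is a placeholder, and reading scenes back assumes the in-memory database exposes the read methods of DataBaseLoad (see the inheritance change below):
```
# Copy an on-disk image database into an in-memory sqlite database
# before heavy, repeated scene look-ups. The path is hypothetical.
dbmem = database2memory('/path/to/database.db')
posorient = dbmem.posorients.iloc[0]
scene = dbmem.scene(posorient)  # now served from RAM
```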
......@@ -65,7 +82,7 @@ class DataBase():
self._logger.exception(msg)
raise TypeError(msg)
_, ext = os.path.splitext(filename)
if ext != '.db':
if ext != '.db' and (filename != ':memory:'):
msg = 'filename must have the .db extension'
self._logger.exception(msg)
raise NameError(msg)
......@@ -116,12 +133,18 @@ class DataBase():
for col in self.normalisation_columns:
self.tablecolumns['normalisation'][col] = 'real'
if os.path.exists(filename):
if (self.create is False) or (os.path.exists(filename)):
self._logger.info('Connect to database')
self.db = sqlite3.connect(
'file:' + filename + '?cache=shared', uri=True,
detect_types=sqlite3.PARSE_DECLTYPES,
timeout=10)
if os.path.exists(filename) or filename == ':memory:':
self.db = sqlite3.connect(
'file:' + filename + '?cache=shared', uri=True,
detect_types=sqlite3.PARSE_DECLTYPES,
timeout=10)
else:
msg = 'Database {} does not exist'.format(filename)
self._logger.exception(msg)
raise NameError(msg)
self.db_cursor = self.db.cursor()
# Check table
self._logger.debug('Check tables')
......@@ -131,10 +154,12 @@ class DataBase():
named {}'.format(filename, tablename)
self._logger.exception(msg)
raise Exception(msg)
elif self.create:
else:
self._logger.info('Create the database')
self.db = sqlite3.connect(
filename, detect_types=sqlite3.PARSE_DECLTYPES)
'file:' + filename + '?cache=shared', uri=True,
detect_types=sqlite3.PARSE_DECLTYPES,
timeout=10)
self.db_cursor = self.db.cursor()
# Create table
self._logger.info('Create tables')
......@@ -146,10 +171,6 @@ class DataBase():
self.db_cursor.execute(
"create table {} {}".format(key, columns))
self.db.commit()
else:
msg = 'Database {} does not exist'.format(filename)
self._logger.exception(msg)
raise NameError(msg)
azimuth = np.deg2rad(np.linspace(-180, 180, 360))
elevation = np.deg2rad(np.linspace(-90, 90, 180))
......@@ -780,12 +801,12 @@ class DataBaseLoad(DataBase):
return denormed_im
class DataBaseSave(DataBase):
class DataBaseSave(DataBaseLoad):
def __init__(self, filename, channels=['R', 'G', 'B', 'D'],
arr_dtype=np.uint8):
"""
"""
DataBase.__init__(self, filename, channels=channels)
DataBaseLoad.__init__(self, filename, channels=channels)
self.arr_dtype = arr_dtype
@property
......
......@@ -17,12 +17,9 @@ def cartesian_to_spherical(x, y, z):
def spherical_to_cartesian(elevation, azimuth, radius=1):
cartesian = np.zeros_like(elevation)
cartesian = np.tile(cartesian[..., np.newaxis], (3,))
x = np.cos(elevation) * np.cos(azimuth)
y = np.cos(elevation) * np.sin(azimuth)
z = np.sin(elevation)
cartesian = radius * cartesian
x = radius*np.cos(elevation) * np.cos(azimuth)
y = radius*np.cos(elevation) * np.sin(azimuth)
z = radius*np.sin(elevation)
return x, y, z
......
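A quick numerical check of the rewritten conversion, with the radius now applied to each component (values chosen by hand):
```
import numpy as np

# elevation=0, azimuth=pi/2, radius=2 should give (0, 2, 0)
elevation, azimuth, radius = 0., np.pi / 2, 2.
x = radius * np.cos(elevation) * np.cos(azimuth)
y = radius * np.cos(elevation) * np.sin(azimuth)
z = radius * np.sin(elevation)
print(np.round([x, y, z], 6))  # [0. 2. 0.]
```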
......@@ -29,6 +29,8 @@ from multiprocessing import Queue, JoinableQueue, Process
import inspect
import navipy.moving.maths as navimomath
from navipy.database import DataBaseLoad
import time
import os
version = float(nx.__version__)
......@@ -49,27 +51,35 @@ class DefaultBrain():
raise NameError('No Callback')
def posorient_columns(convention):
return [('location', 'x'),
('location', 'y'),
('location', 'z'),
(convention, 'alpha_0'),
(convention, 'alpha_1'),
(convention, 'alpha_2')]
def velocities_columns(convention):
return [('location', 'dx'),
('location', 'dy'),
('location', 'dz'),
(convention, 'dalpha_0'),
(convention, 'dalpha_1'),
(convention, 'dalpha_2')]
class AbstractAgent():
def __init__(self, convention='rzyx'):
self._brain = DefaultBrain()
self._alter_posorientvel = defaultcallback
tuples = [('location', 'x'),
('location', 'y'),
('location', 'z'),
(convention, 'alpha_0'),
(convention, 'alpha_1'),
(convention, 'alpha_2')]
tuples = posorient_columns(convention)
index = pd.MultiIndex.from_tuples(tuples,
names=['position',
'orientation'])
self._posorient_col = index
tuples_vel = [('location', 'dx'),
('location', 'dy'),
('location', 'dz'),
(convention, 'dalpha_0'),
(convention, 'dalpha_1'),
(convention, 'dalpha_2')]
tuples_vel = velocities_columns(convention)
index_vel = pd.MultiIndex.from_tuples(tuples_vel,
names=['position',
'orientation'])
......@@ -159,22 +169,62 @@ class AbstractAgent():
return None
class CyberBeeAgent(AbstractAgent):
class CyberBeeAgent(AbstractAgent, Process):
"""
A CyberBeeAgent uses the rendering method of cyberbee. \
CyberBeeAgent is a closed-loop agent and needs to be run within blender \
(see :doc:`rendering`).
Single process
Here comes an example of how to use it
Multi process
CyberBeeAgent inherits from the Process \
class of the multiprocessing module of the standard python \
library. Thus, several CyberBeeAgents can safely be run in parallel.
"""
def __init__(self, brain, convention):
def __init__(self, brain, convention,
posorients_queue=None,
results_queue=None):
if convention is None:
raise Exception("a convention must be specified")
if (posorients_queue is not None) and (results_queue is not None):
Process.__init__(self)
AbstractAgent.__init__(self, convention)
AbstractAgent._alter_posorientvel = \
lambda motion_vec: navimomath.next_pos(motion_vec,
move_mode='free_run')
self._alter_posorientvel = \
lambda motion_vec: navimomath.next_pos(motion_vec,
move_mode='free_run')
self.brain = brain
self._posorients_queue = posorients_queue
self._results_queue = results_queue
def run(self):
""" Only supported when multiprocess"""
if self._posorients_queue is None or self._results_queue is None:
raise NameError('Single agent class has not been initialised '
                + 'with multiprocessing support')
proc_name = self.name
print('Process {} started'.format(proc_name))
while True:
start_posorient = self._posorients_queue.get(timeout=1)
if start_posorient is None:
# Poison pill means shutdown
break
common_id = list(set(start_posorient.index).intersection(
self._posorient_vel.index))
self._posorient_vel.loc[common_id] = start_posorient.loc[common_id]
self._posorient_vel.name = start_posorient.name
self.move()
posorient_vel = self._posorient_vel
self._posorients_queue.task_done()
self._results_queue.put(posorient_vel)
self._posorients_queue.task_done()
print('Process {} done'.format(proc_name))
class GridAgent(AbstractAgent, Process):
......@@ -317,6 +367,15 @@ the agent motion, or
self._graph.add_node(row_id,
posorient=posor)
self.mode_of_motion = mode_of_motion
# Create a dataframe to store the velocities
convention = self._brain.renderer.rotation_convention
tuples_posvel = posorient_columns(convention)
tuples_posvel.extend(velocities_columns(convention))
index_posvel = pd.MultiIndex.from_tuples(tuples_posvel,
names=['position',
'orientation'])
self.velocities = pd.DataFrame(columns=index_posvel,
index=list(self._graph.nodes()))
@property
def graph(self):
......@@ -329,28 +388,30 @@ the agent motion, or
self._graph = graph.copy()
self.check_graph()
def build_graph(self,
ncpu=5,
timeout=1):
def compute_velocities(self,
ncpu=5,
timeout=1,
filename=None,
blocksize=100):
if filename is not None and os.path.exists(filename):
self.velocities = pd.read_hdf(filename)
nodes_tocompute = self.velocities.isna().any(axis=1)
nodes_tocompute = nodes_tocompute[nodes_tocompute].index
# Build a list of nodes
results_edges = []
posorients_queue = JoinableQueue()
results_queue = Queue()
if version < 2:
graph_nodes = list(self._graph.nodes())
else:
graph_nodes = list(self._graph.nodes)
for node in graph_nodes:
for node in nodes_tocompute:
posorients_queue.put(self._graph.nodes[node]['posorient'])
# Start ndatabase loader
convention = self._brain.renderer.rotation_convention
num_agents = ncpu
agents = [GridAgent(copy.copy(self._brain),
posorients_queue=posorients_queue,
results_queue=results_queue)
agents = [CyberBeeAgent(copy.copy(self._brain),
convention=convention,
posorients_queue=posorients_queue,
results_queue=results_queue)
for _ in range(num_agents)]
for w in agents:
w.mode_of_motion = self.mode_of_motion
w.start()
# Add a poison pill for each agent
......@@ -359,12 +420,58 @@ the agent motion, or
# Wait for all of the tasks to finish
# posorients_queue.join()
for _ in range(nx.number_of_nodes(self._graph)):
result = results_queue.get(timeout=timeout)
results_edges.append((result[0].name,
result[1].name))
# print(results_edges[-1])
nline = 0
prev_nline = nline
t_start = time.time()
nbnodes = nodes_tocompute.shape[0]
for _ in range(nbnodes):
res = results_queue.get(timeout=timeout)
self.velocities.loc[res.name, res.index] = res
if (nline-prev_nline) > blocksize:
t_elapse = time.time()-t_start
t_peritem = t_elapse/nline
remain = nbnodes-nline
print('Computed {} in {}'.format(nline, t_elapse))
print('Remain {}, done in {}'.format(remain, remain*t_peritem))
if filename is not None:
self.velocities.to_hdf(filename, key='velocities')
prev_nline = nline
nline += 1
return self.velocities.copy()
def build_graph(self, movemode, moveparam):
"""
Connect edges with a given velocity
"""
if self.velocities.dropna().shape[0] == 0:
raise NameError('compute_velocities should be called first')
edges = pd.Series(data=np.nan, index=self.velocities.index)
# Make sure that the velocities start at the correct location
posorients = self._brain.posorients
myvelocities = self.velocities.copy()
myvelocities = myvelocities.swaplevel(axis=1)
myvelocities.x = posorients.x
myvelocities.y = posorients.y
myvelocities.z = posorients.z
myvelocities.alpha_0 = posorients.alpha_0
myvelocities.alpha_1 = posorients.alpha_1
myvelocities.alpha_2 = posorients.alpha_2
myvelocities = myvelocities.swaplevel(axis=1)
for ii, row in myvelocities.iterrows():
if np.any(np.isnan(row)):
continue
# Move according to user mode of motion
nposorient = navimomath.next_pos(row, movemode, moveparam)
# Snap to the closest point
nextpos_index = navimomath.closest_pos(
nposorient, myvelocities)
edges[ii] = nextpos_index.name
# Format for graph
validedges = edges.dropna()
results_edges = np.vstack(
[validedges.index,
validedges.values]).transpose()
# Add to graph
self._graph.add_edges_from(results_edges)
self.check_graph()
......@@ -419,10 +526,10 @@ the agent motion, or
# [0] because one node of the attractor is enough
# all other node of the attractor are connected to this one
target = list(att['attractor'])[0]
attractors[att_i]['paths'] = \
nx.shortest_path(self.graph, target=target)
attractors[att_i]['sources'] = \
list(attractors[att_i]['paths'].keys())
attractors[att_i]['paths'] = nx.shortest_path(
self.graph, target=target)
attractors[att_i]['sources'] = list(
attractors[att_i]['paths'].keys())
return attractors
def catchment_area(self, attractors=None):
......
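Taken together, the new methods split graph construction into two steps: `compute_velocities` renders one velocity per grid node (checkpointing to HDF every `blocksize` nodes), and `build_graph` turns the velocities into edges by snapping each next position onto the grid. A hedged sketch of the call sequence; the brain object and the file name are placeholders, and 'free_run' is the move mode used above:
```
# Hypothetical driver; only the method signatures come from the
# sources above, the rest is assumed.
agent = GridAgent(mybrain)  # mybrain: a navipy brain, defined elsewhere
agent.compute_velocities(ncpu=4,
                         filename='velocities.hdf',  # checkpoint file
                         blocksize=100)
agent.build_graph(movemode='free_run', moveparam=None)
```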
<?xml version="1.0"?>
<opencv_storage>
<ncameras>2.</ncameras>
<!-- resumed -->
<pose_0 type_id="opencv-matrix">
<rows>4</rows>
<cols>4</cols>
<dt>d</dt>
<data>
-9.9975884141929716e-01 -1.9663083577624682e-02
-9.7786577894851904e-03 2.3960770101324730e+01
1.2596475289801174e-02 -8.7822032197929456e-01
4.7809036266469301e-01 -1.7450255985791409e+02
-1.7988546751139770e-02 4.7785189045017246e-01
8.7825621715930891e-01 -1.2689224150808659e+03 0. 0. 0. 1.</data></pose_0>
<!-- resumed -->
<intrinsic_matrix_0 type_id="opencv-matrix">
<rows>3</rows>
<cols>3</cols>
<dt>d</dt>
<data>
1.3854069839766423e+03 0. 1.0142369229097144e+03 0.
1.3860018990405542e+03 9.6972229918852702e+02 0. 0. 1.</data></intrinsic_matrix_0>
<!-- resumed -->
<distortion_0 type_id="opencv-matrix">
<rows>5</rows>
<cols>1</cols>
<dt>d</dt>
<data>
-1.6496133317304024e-01 7.6541851811227621e-02
1.4703239618294489e-04 -5.0753504623544379e-04
5.9159458153562314e-02</data></distortion_0>
<!-- resumed -->
<pose_1 type_id="opencv-matrix">
<rows>4</rows>
<cols>4</cols>
<dt>d</dt>
<data>
-9.9999925934757616e-01 4.6011102855678676e-05
-1.2162184333542050e-03 9.1657057770283288e+00
-4.6920052752197648e-05 -9.9999971964107015e-01
7.4733947436152800e-04 4.1419559776785363e+01
-1.2161837064630836e-03 7.4739598587578605e-04
9.9999898114769714e-01 -1.1865049751543168e+03 0. 0. 0. 1.</data></pose_1>
<!-- resumed -->
<intrinsic_matrix_1 type_id="opencv-matrix">
<rows>3</rows>
<cols>3</cols>
<dt>d</dt>
<data>
1.3877924464919429e+03 0. 1.0250625063928485e+03 0.
1.3892469602478291e+03 1.0633879325646449e+03 0. 0. 1.</data></intrinsic_matrix_1>
<!-- resumed -->
<distortion_1 type_id="opencv-matrix">
<rows>5</rows>
<cols>1</cols>
<dt>d</dt>
<data>
-1.6987468870812769e-01 1.0743242290093688e-01
1.9614281508288295e-04 -6.6638746010521906e-04
-8.9751502378932948e-03</data></distortion_1>
</opencv_storage>
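This calibration file follows OpenCV's FileStorage XML format, so it can be read back directly; a minimal sketch (the file name is an assumption):
```
import cv2

fs = cv2.FileStorage('cameras_calib.xml', cv2.FILE_STORAGE_READ)
ncameras = int(fs.getNode('ncameras').real())
pose_0 = fs.getNode('pose_0').mat()           # 4x4 extrinsic pose
k_0 = fs.getNode('intrinsic_matrix_0').mat()  # 3x3 intrinsics
dist_0 = fs.getNode('distortion_0').mat()     # 5x1 distortion
fs.release()
print(ncameras, pose_0.shape, k_0.shape, dist_0.shape)
```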
......@@ -9,13 +9,13 @@ import tempfile
from navipy.scripts import parser_logger, args_to_logparam
# Following need to be imported in blender as well
from navipy.sensors.renderer import BlenderRender
from navipy.tools.trajectory import Trajectory
from navipy.trajectories import Trajectory
from navipy import logger
importwithinblender = [
'from navipy.sensors.renderer import BlenderRender',
'from navipy.tools.trajectory import Trajectory',
'from navipy.trajectories import Trajectory',
'from navipy import logger']
......@@ -24,9 +24,9 @@ def parser_blend_alongtraj():
parser = parser_logger()
arghelp = 'Path to the environment (.blend) in which your agent lives'
defaultworld = pkg_resources.resource_filename(
'navipy', 'resources/corridor.blend')
'navipy', 'resources/sample_experiment/Ravi_2018/corridor.blend')
defaulttraj = pkg_resources.resource_filename(
'navipy', 'resources/corridor_traj.csv')
'navipy', 'resources/sample_experiment/Ravi_2018/corridor_traj.csv')
defaultconfig = pkg_resources.resource_filename(
'navipy', 'resources/configs/BlenderRender.yaml')
defaultoutput = tempfile.NamedTemporaryFile().name
......
......@@ -8,11 +8,15 @@ import inspect
import pkg_resources
import tempfile
# Following need to be imported in blender as well
from navipy.tools.trajectory import Trajectory
from navipy.trajectories import Trajectory
from navipy.maths.homogeneous_transformations import compose_matrix
from navipy.maths.quaternion import matrix as quatmatrix
importwithinblender = [
'from navipy.tools.trajectory import Trajectory']
'from navipy.trajectories import Trajectory',
'from navipy.maths.homogeneous_transformations import compose_matrix',
'from navipy.maths.quaternion import matrix as quatmatrix']
def parser_blend_overlaytraj():
......@@ -20,9 +24,9 @@ def parser_blend_overlaytraj():
parser = argparse.ArgumentParser()
arghelp = 'Path to the environment (.blend) in which your agent lives'
defaultworld = pkg_resources.resource_filename(
'navipy', 'resources/corridor.blend')
'navipy', 'resources/sample_experiment/Ravi_2018/corridor.blend')
defaulttraj = pkg_resources.resource_filename(
'navipy', 'resources/corridor_traj.csv')
'navipy', 'resources/sample_experiment/Ravi_2018/corridor_traj.csv')
parser.add_argument('--blender-world',
type=str,
default=defaultworld,
......@@ -81,6 +85,62 @@ def run(trajfile):
#bpy.context.object.data.resolution_u = 3
#bpy.context.object.data.bevel_object = bpy.data.objects["NurbsCircle"]
# add frame
bpy.ops.object.empty_add(type='ARROWS',
radius=1,
view_align=False,
location=(0, 0, 0),
layers=(True, False, False, False, False,
False, False, False, False, False,
False, False, False, False, False,
False, False, False, False, False))
frameobject = bpy.context.object
frameobject.name = "camera_frame"
# find camera:
# to then move it on each frame
camera_found = False
for obj in bpy.context.scene.objects:
if obj.type == 'CAMERA':
camera = obj
camera_found = True
break
if not camera_found:
raise NameError('The blender file does not contain a camera')
# Remove animation to be able to move the camera
# and add animation
camera.animation_data_clear()
# set first and last frame
context = bpy.context
context.scene.frame_start = trajectory.index.min()
context.scene.frame_end = trajectory.index.max()
keyInterp = context.user_preferences.edit.keyframe_new_interpolation_type
context.user_preferences.edit.keyframe_new_interpolation_type = 'LINEAR'
convention = trajectory.rotation_mode
_renderaxis = '+x'
for frame_i, posorient in trajectory.iterrows():
context.scene.frame_current = frame_i
# Render
rotmat = compose_matrix(
angles=posorient.loc[convention],
translate=posorient.loc['location'],
axes=convention)
frameobject.matrix_world = rotmat.transpose()
if _renderaxis == '+x':
initrot = quatmatrix([0.5, 0.5, -0.5, -0.5])
# The camera is aligned in -z
# A rotation along z will thus roll the camera
# Although the camera should yaw in such case
rotmat[:3, :3] = rotmat[:3, :3].dot(initrot[:3, :3])
# matrix_world in blender is column-major order
# while numpy is row-major order
camera.matrix_world = rotmat.transpose()
camera.keyframe_insert(data_path='location', frame=(frame_i))
camera.keyframe_insert(data_path='rotation_euler', frame=(frame_i))
frameobject.keyframe_insert(data_path='location', frame=(frame_i))
frameobject.keyframe_insert(
data_path='rotation_euler', frame=(frame_i))
context.user_preferences.edit.keyframe_new_interpolation_type = keyInterp
def main():
# encoding for temporary file
......
#!/usr/bin/env python3
import fileinput
import matplotlib
filename = matplotlib.matplotlib_fname()
print('In file: {}'.format(filename))
text_to_search = 'backend '
replacement_text = '#backend '
print('replace {} by {}'.format(text_to_search, replacement_text))
with fileinput.FileInput(filename, inplace=True, backup='.bak') as file:
for line in file:
print(line.replace(text_to_search, replacement_text), end='')
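The script comments out the `backend` line of the active matplotlibrc, so matplotlib falls back to a non-interactive backend on headless CI runners. An equivalent per-process alternative (standard matplotlib API) is:
```
import matplotlib
matplotlib.use('Agg')  # select the non-interactive Agg backend
import matplotlib.pyplot as plt  # must be imported after matplotlib.use
```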