Commit 2f0ca967 authored by Olivier Bertrand's avatar Olivier Bertrand

Restructure doc as overview, tutorials, references

parent 1adfb205
""" """
The Place comparator list different methods to Comparing
compare a current place to a memorised place or
memorised places.
""" """
import numpy as np import numpy as np
from navipy.processing.tools import is_ibpc, is_obpc from navipy.processing.tools import is_ibpc, is_obpc
...@@ -73,10 +71,7 @@ the current and memorised place code.
.. note:: assume that the image is periodic along the x axis
          (the left-right axis)
.. literalinclude:: example/comparing/ridf.py
:lines: 4,20
.. plot:: example/comparing/ridf.py
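The idea behind the rotational image difference function can be sketched as follows. Note that `ridf_sketch` is a hypothetical helper written for illustration, not the toolbox function; it relies on the image being periodic along the azimuth (left-right) axis, so `np.roll` is a valid rotation:

```python
import numpy as np

def ridf_sketch(current, memorised):
    # root-mean-square difference between the memorised image and every
    # azimuthal rotation of the current image (periodic along axis 1)
    n_azimuth = current.shape[1]
    rmse = np.empty(n_azimuth)
    for shift in range(n_azimuth):
        rotated = np.roll(current, shift, axis=1)
        rmse[shift] = np.sqrt(np.mean((rotated - memorised) ** 2))
    return rmse

memorised = np.random.rand(10, 36, 3)     # [elevation, azimuth, channel]
current = np.roll(memorised, -5, axis=1)  # agent turned by 5 pixels
errors = ridf_sketch(current, memorised)
print(np.argmin(errors))                  # -> 5, the rotation undoing the turn
```

The minimum of the difference function indicates the rotation that best aligns the current view with the memorised one.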
""" """
if not is_ibpc(current): # and not is_obpc(current): if not is_ibpc(current): # and not is_obpc(current):
raise TypeError('The current and memory place code\ raise TypeError('The current and memory place code\
...
""" """
Database are generated by the rendering module, and contains all \
images and there corresponding position-orientations.
* position_orientation: containing all position and orientation of where \
images were rendered. The position-orientation is described by \
['x','y','z','alpha_0','alpha_1','alpha_2']
* image: containing all images ever rendered. Each channel of each image \
are normalised, so to use the full coding range.
* normalisation: the normalisation constantes
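The layout described above can be sketched with plain pandas/numpy. This is an illustrative stand-in, not the actual database format produced by the rendering module:

```python
import numpy as np
import pandas as pd

# position_orientation: one row per rendered position-orientation
posorients = pd.DataFrame(
    [[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
     [1.0, 0.0, 1.0, 0.5, 0.0, 0.0]],
    columns=['x', 'y', 'z', 'alpha_0', 'alpha_1', 'alpha_2'])

# image: each channel is normalised so as to use the full coding range;
# the per-channel constants would be stored in the normalisation table
image = np.random.rand(10, 36, 4)
constants = image.max(axis=(0, 1))   # per-channel normalisation constants
normalised = image / constants       # every channel now peaks at 1.0
assert np.allclose(normalised.max(axis=(0, 1)), 1.0)
```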
How to load a database
----------------------
.. literalinclude:: example/database/get_posorients.py
:lines: 8
How to load all position-orientation
------------------------------------
The database contains every position-orientation \
at which an image has been rendered. In certain \
situations, it may be useful to know all \
position-orientations in the database, i.e. \
to load the full table of position-orientations.
.. literalinclude:: example/database/get_posorients.py
:lines: 9-10
.. plot:: example/database/get_posorients.py
How to load an image
--------------------
The database contains images which can be processed differently \
depending on the navigation strategy being used.
Images are stored at given position-orientations. To load an image, \
the position-orientation can be given. The DataBaseLoader will \
check whether this position-orientation has been rendered. If it \
has, the image is returned.
.. literalinclude:: example/database/load_image_posorient.py
:lines: 14-23
.. plot:: example/database/load_image_posorient.py
However, checking in the database whether an image has already been \
rendered at a given position-orientation takes time. To speed up \
certain calculations, images can instead be accessed by row number. \
Indeed, each position-orientation is identified by a unique row \
number. This number is consistent through the entire database. Thus, \
an image can be loaded by providing the row number.
.. literalinclude:: example/database/load_image_rowid.py
:lines: 13-15
.. plot:: example/database/load_image_rowid.py
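The two access modes can be contrasted with an in-memory stand-in for the database. The helpers below are hypothetical and only illustrate why row-number access skips the table search that position-orientation access requires:

```python
import numpy as np
import pandas as pd

posorients = pd.DataFrame(
    [[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
     [1.0, 0.0, 1.0, 0.0, 0.0, 0.0]],
    columns=['x', 'y', 'z', 'alpha_0', 'alpha_1', 'alpha_2'])
images = np.random.rand(2, 10, 36, 4)  # one image per table row

def image_by_posorient(posorient):
    # slow path: search the table for a matching position-orientation
    match = (posorients == posorient).all(axis=1)
    if not match.any():
        raise ValueError('posorient has not been rendered')
    return images[match.idxmax()]

def image_by_rowid(rowid):
    # fast path: the row number identifies the posorient uniquely
    return images[rowid]

assert np.array_equal(
    image_by_posorient([1.0, 0.0, 1.0, 0.0, 0.0, 0.0]),
    image_by_rowid(1))
```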
.. todo:: channels as part of database
""" """
......
...@@ -5,7 +5,7 @@ A standard method to move an agent is to update:
1. update the sensory information at the current agent location :math:`x`
2. deduce the agent motion :math:`vdt` from this information
3. displace the agent by motion ( :math:`x\\rightarrow x + vdt`)
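The three steps above can be sketched as a minimal closed loop. The sensor and motion rules below are illustrative stand-ins (a toy one-dimensional agent drawn towards a goal), not the toolbox's brain model:

```python
import numpy as np

def sensory_info(x):
    # stand-in sensor: the signed distance to a goal located at 10.0
    return 10.0 - x

def motion(info, gain=0.1, dt=1.0):
    # deduce the agent motion v*dt from the sensory information
    return gain * info * dt

x = 0.0
for _ in range(200):
    info = sensory_info(x)   # 1. sense at the current location x
    vdt = motion(info)       # 2. deduce the motion v*dt
    x = x + vdt              # 3. displace the agent: x -> x + v*dt
print(round(x, 3))           # -> 10.0, the agent converges on the goal
```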
The use of a closed-loop model including visual rendering is \
...
""" """
+----------------+--------------+-------------+ +-------------------------------------------+\
|Agent class |Type of agent | Rendering + --------------+-------------+
+================+==============+=============+ |Agent class |\
|:CyberBeeAgent: |Close loop |Online | Type of agent | Rendering |
+----------------+ |-------------+ +===========================================+\
|:GraphAgent: | |Pre-rendered | ==============+=============+
+----------------+--------------+ | |:class:`navipy.moving.agent.CyberBeeAgent` |\
|:GridAgent: +Open loop | | Close loop |Online |
+----------------+--------------+-------------+ +-------------------------------------------+\
+-------------+
|:class:`navipy.moving.agent.GraphAgent` |\
|Pre-rendered |
+-------------------------------------------+\
--------------+ +
|:class:`navipy.moving.agent.GridAgent` |\
Open loop | |
+-------------------------------------------+\
--------------+-------------+
""" """
import numpy as np
import pandas as pd
...@@ -122,7 +133,6 @@ class CyberBeeAgent(AbstractAgent):
CyberBeeAgent is a closed-loop agent and needs to be run within blender \
(see :doc:`rendering`).
""" """
def __init__(self, brain): def __init__(self, brain):
......
""" """
An agent comes equipped with a battery of sensors, such as a camera \
depth estimation sensors, compass, and odometer. Here, we focus on the \
the processing of retino-topic information provided by a camera and a \
depth estimation sensor. This retino-topic information is refer as a scene.
image based scene (IBS)
    A classical image. Each pixel is viewed in a direction
    (elevation, azimuth) in a regular manner.
    In that case the scene is a 4d numpy array
    [elevation-index, azimuth-index, channel-index, 1].
Ommatidium based scene (OBS)
    In an ommatidia based scene, the viewing directions
    do not need to be regularly spaced.
    In that case the scene is a 3d numpy array
    [ommatidia-index, channel-index, 1].
Place code
----------
Processing a scene yields a certain encoding of the information at the \
location where the scene was acquired, rendered, or seen by the agent.
By extension, a place-code is either image based or ommatidium based.
The number of dimensions of an ib-place-code is always 4, and of an
ob-place-code always 3.
image based place-code (IBPC)
    A place code derived from an IBS. Each pixel is viewed in a direction
    (elevation, azimuth) in a regular manner.
    In that case the place code is a 4d numpy array
    [elevation-index, azimuth-index, channel-index, component-index].
Ommatidium based place-code (OBPC)
    A place code derived from an OBS; the viewing directions
    do not need to be regularly spaced.
    In that case the place code is a 3d numpy array
    [ommatidia-index, channel-index, component-index].
Abusing the terminology of a place-code, a scene can be a place-code.
Therefore an IBS and an OBS have 4 and 3 dimensions, respectively.
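These shape conventions can be checked directly; the array sizes below are arbitrary and only illustrate the dimensionality rules:

```python
import numpy as np

# image based: [elevation-index, azimuth-index, channel-index, component-index]
ibpc = np.random.rand(90, 360, 4, 1)
# ommatidium based: [ommatidia-index, channel-index, component-index]
obpc = np.random.rand(5000, 4, 1)

assert ibpc.ndim == 4   # an ib-place-code always has 4 dimensions
assert obpc.ndim == 3   # an ob-place-code always has 3 dimensions
```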
Skyline
~~~~~~~
.. autofunction:: navipy.processing.pcode.skyline
...@@ -61,10 +19,4 @@ Average place-code vector
~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: navipy.processing.pcode.apcv
Motion code
-----------
Optic flow
~~~~~~~~~~
.. autofunction:: navipy.processing.mcode.optic_flow
""" """
...@@ -20,11 +20,6 @@ def skyline(scene): ...@@ -20,11 +20,6 @@ def skyline(scene):
:returns: the skyline [1,azimuth,channel,1] :returns: the skyline [1,azimuth,channel,1]
:rtype: np.ndarray :rtype: np.ndarray
.. literalinclude:: example/processing/skyline.py
:lines: 16
.. plot:: example/processing/skyline.py
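One simple notion of a skyline, collapsing the elevation axis while keeping azimuth and channel, can be sketched as follows. `skyline_sketch` averages over elevation for illustration and is not necessarily identical to the toolbox implementation:

```python
import numpy as np

def skyline_sketch(scene):
    # collapse the elevation axis, keeping azimuth and channel:
    # [elevation, azimuth, channel, 1] -> [1, azimuth, channel, 1]
    return scene.mean(axis=0, keepdims=True)

scene = np.random.rand(90, 360, 4, 1)       # an image based place-code
print(skyline_sketch(scene).shape)          # -> (1, 360, 4, 1)
```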
""" """
if not is_ibpc(scene): if not is_ibpc(scene):
raise TypeError('scene should be image based to compute a skyline') raise TypeError('scene should be image based to compute a skyline')
...@@ -49,11 +44,6 @@ and minimum of the local image intensity
    :returns: the michelson-contrast
    :rtype: np.ndarray
.. literalinclude:: example/processing/michelson_contrast.py
:lines: 16
.. plot:: example/processing/michelson_contrast.py
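The michelson contrast relates the maximum and minimum of the local image intensity: (Imax - Imin) / (Imax + Imin). A sketch over square windows, written for illustration rather than matching the toolbox function (and assuming numpy >= 1.20 for `sliding_window_view`):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def michelson_contrast_sketch(image, size=3):
    # local max and min of the image intensity over size x size windows
    windows = sliding_window_view(image, (size, size))
    i_max = windows.max(axis=(-2, -1))
    i_min = windows.min(axis=(-2, -1))
    # michelson contrast: (Imax - Imin) / (Imax + Imin)
    return (i_max - i_min) / (i_max + i_min + 1e-12)

image = np.random.rand(90, 360)
contrast = michelson_contrast_sketch(image)
# the contrast is bounded between 0 and 1 for non-negative intensities
assert 0.0 <= contrast.min() and contrast.max() <= 1.0
```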
""" """
check_scene(scene) check_scene(scene)
if not is_ibpc(scene): if not is_ibpc(scene):
...@@ -88,11 +78,6 @@ def contrast_weighted_nearness(scene, contrast_size=3, distance_channel=3):
    and minimum of the local image intensity in the michelson-contrast.
    :param distance_channel: the index of the distance-channel.
.. literalinclude:: example/processing/contrast_weighted_nearness.py
:lines: 17-18
.. plot:: example/processing/contrast_weighted_nearness.py
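The idea of contrast-weighted nearness can be sketched by weighting the inverse distance (nearness) with a contrast map. The helper below is illustrative only; the toolbox function additionally extracts the distance channel from the scene:

```python
import numpy as np

def contrast_weighted_nearness_sketch(contrast, distance):
    # nearness is the inverse of the distance to objects; weighting it
    # by local contrast emphasises close, high-contrast objects
    return contrast / distance

contrast = np.random.rand(90, 360)
distance = 1.0 + 9.0 * np.random.rand(90, 360)   # distances in [1, 10]
cwn = contrast_weighted_nearness_sketch(contrast, distance)
assert cwn.shape == (90, 360)
```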
""" """
check_scene(scene) check_scene(scene)
if not isinstance(contrast_size, int): if not isinstance(contrast_size, int):
...@@ -121,11 +106,6 @@ def pcv(place_code, viewing_directions):
    :returns: the place code vectors in cartesian coordinates
    :rtype: (np.ndarray)
.. literalinclude:: example/processing/pcv.py
:lines: 16-17
.. plot:: example/processing/pcv.py
""" """
# print("place code shape",place_code.shape) # print("place code shape",place_code.shape)
if is_ibpc(place_code): if is_ibpc(place_code):
...@@ -171,11 +151,6 @@ def apcv(place_code, viewing_directions):
    :returns: the average place-code vector
    :rtype: (np.ndarray)
.. literalinclude:: example/processing/apcv.py
:lines: 16-17
.. plot:: example/processing/apcv.py
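The average place-code vector can be sketched by weighting the cartesian unit vectors of the viewing directions with the place-code values and averaging. `apcv_sketch` is a hypothetical helper working on a 2d place code for simplicity (the toolbox operates on 4d/3d place codes):

```python
import numpy as np

def apcv_sketch(place_code, elevation, azimuth):
    # unit vectors of the viewing directions in cartesian coordinates
    x = np.cos(elevation) * np.cos(azimuth)
    y = np.cos(elevation) * np.sin(azimuth)
    z = np.sin(elevation)
    directions = np.stack([x, y, z], axis=-1)       # shape [..., 3]
    # weight every direction by its place-code value, then average
    weighted = place_code[..., np.newaxis] * directions
    return weighted.mean(axis=(0, 1))

elevation, azimuth = np.meshgrid(
    np.linspace(-np.pi / 2, np.pi / 2, 90),
    np.linspace(-np.pi, np.pi, 360), indexing='ij')
place_code = np.ones((90, 360))                     # uniform scene
apcv_vec = apcv_sketch(place_code, elevation, azimuth)
# a uniform scene has no preferred direction: x and y average to ~0
assert np.allclose(apcv_vec[:2], 0.0, atol=1e-2)
```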
""" """
check_scene(place_code) check_scene(place_code)
check_viewing_direction(viewing_directions) check_viewing_direction(viewing_directions)
......
""" """
.. literalinclude:: example/rendering/blenddemo_beesampling.py Bee sampler / database creator
:lines: 6
With the toolbox at our disposal, we just need to configure the \
BeeSampling to render images on a regular 3D grid.
.. literalinclude:: example/rendering/blenddemo_beesampling.py
:lines: 9
.. literalinclude:: example/rendering/blenddemo_beesampling.py
:lines: 12-19
If we want to use the distance to objects, we need to tell the \
BeeSampling what the maximum distance to objects in the environment is. \
Otherwise the distance can go to infinity, and since the images are \
compressed in the database, all distances to objects would be equal to \
zero:
.. literalinclude:: example/rendering/blenddemo_beesampling.py
:lines: 23-24
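Why an unbounded maximum collapses every distance to zero can be illustrated with a toy compression scheme (hypothetical; the database's actual storage format may differ):

```python
import numpy as np

def compress_to_uint8(distance, max_distance):
    # distances are clipped to max_distance, scaled to [0, 255] and
    # stored as 8-bit integers
    clipped = np.clip(distance, 0.0, max_distance)
    return np.round(255.0 * clipped / max_distance).astype(np.uint8)

distance = np.array([1.0, 5.0, 1e12])  # a sky pixel has a huge distance
# without a sensible maximum, every ordinary distance rounds to 0
assert (compress_to_uint8(distance, 1e12)[:2] == 0).all()
# with the scene's true maximum, distances stay distinguishable
assert (compress_to_uint8(distance, 10.0)[:2] != 0).all()
```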
Finally, we can generate the database.
.. literalinclude:: example/rendering/blenddemo_beesampling.py
:lines: 28-29
""" """
import warnings import warnings
try: try:
......
""" """
Navipy & blender Renderer
----------------
What is blender?
~~~~~~~~~~~~~~~~
Explain blender
Create a world
~~~~~~~~~~~~~~
Explain How to create env for navipy
Using navipy in blender
~~~~~~~~~~~~~~~~~~~~~~~
Blender comes with its own python installation. Thus, we need to \
tell blender to use our virtualenv, where the navigation toolbox \
is installed. To do so, we need to import the os module
.. literalinclude:: blender_run.py
:lines: 6-7
then activate the environment by using the following function:
.. literalinclude:: blender_run.py
:lines: 13-18
Here venv_path is the path to the virtual environment within which \
navipy has been installed.
Now, blender can import all modules used by the navigation toolbox.
How to run python code with blender:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>> blender path/to/world.blend --background --python path/to/code.py
How to generate a database using blender
----------------------------------------
.. automodule:: navipy.sensors.bee_sampling
Custom sampling
---------------
.. autoclass:: navipy.sensors.renderer.BlenderRender
Rendering classes
-----------------
.. autoclass:: navipy.sensors.bee_sampling.BeeSampling
:members:
.. autoclass:: navipy.sensors.renderer.BlenderRender
:members:
""" """
import warnings import warnings
try: try:
...@@ -70,20 +23,6 @@ class BlenderRender():
The Bee eye is a panoramic camera with equirectangular projection
The light rays attaining the eyes are filtered with a gaussian.
.. literalinclude:: example/rendering/blenddemo_cyberbee.py
:lines: 5
With the toolbox at our disposal, we just need to configure the \
Cyberbee to render images at desired positions.
.. literalinclude:: example/rendering/blenddemo_cyberbee.py
:lines: 8-13
To render a scene at a given position, we just have to do:
.. literalinclude:: example/rendering/blenddemo_cyberbee.py
:lines: 14-22
""" """
def __init__(self): def __init__(self):
......