Commit bc32828d authored by Olivier Bertrand

remove overview

parent 8cddcd72
Brain
=====
Every agent comes with a brain processing the output of
senses or sensors, for biological or technical agents, respectively.
The senses of agents in navipy are limited to:

* 4d vision (brightness + depth)
The 4d vision sense is controlled by the rendering module, either
rendering online or loading from a database of pre-rendered images.
For example, to use pre-rendered images from a database:

.. literalinclude:: examples/apcv.py
   :lines: 10-11

Then the brain can be updated at a new position-orientation:

.. literalinclude:: examples/apcv.py
   :lines: 15
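The scene held by the vision sense is a 4d array. As a toy illustration of its layout (the array sizes here are made up; the [elevation-index, azimuth-index, channel-index, 1] convention is described in the processing section):

```python
import numpy as np

# Toy image-based scene: 90 elevations x 180 azimuths,
# 4 channels (R, G, B brightness + D depth), singleton last axis.
scene = np.random.rand(90, 180, 4, 1)

brightness = scene[:, :, :3, 0]  # the three brightness channels
depth = scene[:, :, 3, 0]        # the depth channel

print(brightness.shape)  # (90, 180, 3)
print(depth.shape)       # (90, 180)
```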
Building your own brain
-----------------------
The Brain class is abstract, and therefore cannot control an agent by itself.
To control an agent, a Brain must implement a method called velocity.
For example, a stationary agent should always return a null velocity.
.. literalinclude:: examples/static_brain.py
   :lines: 3,9-17
An agent using an average skyline homing vector can be built as follows:

.. literalinclude:: examples/asv_brain.py
   :lines: 4-5,11-36
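The velocity convention (a pandas Series indexed by ['dx', 'dy', 'dz', 'dalpha_0', 'dalpha_1', 'dalpha_2']) pairs naturally with the position-orientation convention. A minimal sketch of stepping a position-orientation by a velocity; the index renaming is an illustration, not necessarily how navipy's moving module applies velocities:

```python
import pandas as pd

posorient = pd.Series(data=[0.0, -0.25, 3.0, 0.0, 0.0, 0.0],
                      index=['x', 'y', 'z',
                             'alpha_0', 'alpha_1', 'alpha_2'])
velocity = pd.Series(data=[0.1, 0.0, 0.0, 0.0, 0.0, 0.0],
                     index=['dx', 'dy', 'dz',
                            'dalpha_0', 'dalpha_1', 'dalpha_2'])
# Strip the leading 'd' so the two indexes align, then add.
step = velocity.rename(lambda name: name[1:])
new_posorient = posorient + step
print(new_posorient['x'])  # 0.1
```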
Comparing
=========
Rotational image difference function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: examples/ridf.py
   :lines: 4,20

.. plot:: overview/examples/ridf.py
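navipy provides rot_imagediff for this (used in the example above). To illustrate the idea itself, here is a hand-rolled sketch that rolls the current image along its azimuth axis and records the mean squared difference at each shift; the squared-difference metric and axis layout are assumptions of this sketch, not navipy's exact definition:

```python
import numpy as np

def ridf_sketch(current, memory):
    """Mean squared difference between the memory image and the
    current image rotated by every azimuth shift (axis 1 = azimuth)."""
    n_azimuth = current.shape[1]
    diff = np.empty(n_azimuth)
    for shift in range(n_azimuth):
        rotated = np.roll(current, shift, axis=1)
        diff[shift] = np.mean((rotated - memory) ** 2)
    return diff

# A memory image and a shifted copy: the minimum of the RIDF
# recovers the rotation between the two.
memory = np.random.rand(10, 36)
current = np.roll(memory, 5, axis=1)
ridf = ridf_sketch(current, memory)
print(np.argmin(ridf))  # 31: the 5-step rotation is undone after 36 - 5 shifts
```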
Database
========
Databases are generated by the rendering module, and contain all
images and their corresponding position-orientations:

* position_orientation: all positions and orientations
  at which images were rendered. A position-orientation is
  described by ['x', 'y', 'z', 'alpha_0', 'alpha_1', 'alpha_2'].
* image: all rendered images.
  Each channel of each image is normalised, so as to use the
  full coding range.
* normalisation: the normalisation constants.
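As a sketch of what per-channel normalisation means, a channel can be stretched to the full coding range (the 8-bit range here is an assumption for illustration; the normalisation constants stored in the database allow the original values to be recovered):

```python
import numpy as np

def normalise_channel(channel):
    """Stretch one image channel to the full 8-bit coding range."""
    channel = channel.astype(float)
    channel -= channel.min()
    channel /= channel.max()
    return (channel * 255).astype(np.uint8)

chan = np.array([[0.2, 0.4], [0.6, 1.0]])
print(normalise_channel(chan))  # values become 0, 63, 127, 255
```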
How to load a database
----------------------
.. literalinclude:: examples/get_posorients.py
   :lines: 8
How to load all position-orientations
-------------------------------------
The database contains all position-orientations
at which an image has been rendered. In certain
situations, it may be useful to know all
position-orientations in the database; more technically
speaking, to load the full table of position-orientations.
.. literalinclude:: examples/get_posorients.py
   :lines: 9-10

.. ipython:: examples/get_posorients.py
   :verbatim:
How to load an image
--------------------
The database contains images, which can be processed differently
depending on the navigation strategy being used.
Images are stored at given position-orientations. To load an image,
a position-orientation can be given. DataBaseLoad will
check whether this position-orientation has been rendered, and if so,
the image will be returned.
.. literalinclude:: examples/load_image_posorient.py
   :lines: 14-23

.. plot:: overview/examples/load_image_posorient.py
However, checking in the database whether an image has already been
rendered at a given position-orientation costs time. To speed up
certain calculations, images can instead be accessed by row number.
Indeed, each position-orientation is identified by a unique row
number. This number is consistent throughout the entire database; thus,
an image can be loaded by providing its row number.
.. literalinclude:: examples/load_image_rowid.py
   :lines: 13-15

.. plot:: overview/examples/load_image_rowid.py
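Conversely, the row number of a known position can be recovered by filtering the posorients table, as the ASVBrain example does for its goal position. A sketch on a toy table standing in for mydb.posorients:

```python
import pandas as pd

# Toy stand-in for mydb.posorients: the index carries the row number.
posorients = pd.DataFrame({'x': [0.0, 0.0, 1.0],
                           'y': [0.0, 1.0, 0.0]},
                          index=[1, 2, 3])
goalpos = [0.0, 1.0]
rowid = posorients[(posorients.x == goalpos[0])
                   & (posorients.y == goalpos[1])].index[0]
print(rowid)  # 2
```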
.. todo:: channels as part of database
import matplotlib.pyplot as plt
from navipy.database import DataBaseLoad
from navipy.processing import pcode
from navipy.processing import tools
from navipy.processing import constants
from navipy import Brain
import pkg_resources
# 1) Connect to the database
mydb_filename = pkg_resources.resource_filename(
    'navipy', 'resources/database.db')
mydb = DataBaseLoad(mydb_filename)
mybrain = Brain(renderer=mydb)
# 2) Define the position-orientation at which
# we want the image
posorient = mydb.posorients.loc[12, :]
mybrain.update(posorient)
my_apcv = pcode.apcv(mybrain.vision.scene,
                     mybrain.vision.viewing_directions)
my_apcv_sph = tools.cartesian_to_spherical(x=my_apcv[..., 0],
                                           y=my_apcv[..., 1],
                                           z=my_apcv[..., 2])
elevation = mydb.viewing_directions[...,
                                    constants.__spherical_indeces__[
                                        'elevation']]
azimuth = mydb.viewing_directions[...,
                                  constants.__spherical_indeces__[
                                      'azimuth']]
f, axarr = plt.subplots(1, 2, figsize=(15, 4))
f.show()
import numpy as np
import pandas as pd
from navipy.database import DataBaseLoad
from navipy.processing.pcode import apcv, skyline
from navipy import Brain
import pkg_resources
# 0) Define a class inheriting from Brain
class ASVBrain(Brain):
    def __init__(self, renderer=None,
                 channel=0, goalpos=[0, 0]):
        Brain.__init__(self, renderer=renderer)
        # Init memory
        locid = self.posorients[(self.posorients.x == goalpos[0])
                                & (self.posorients.y == goalpos[1])].index[0]
        posorient = self.posorients.loc[locid, :]
        self.update(posorient)
        self.channel = channel
        self.memory = self.asv()

    def asv(self):
        skyl = skyline(self.vision.scene)
        vector = apcv(skyl,
                      self.vision.viewing_directions)
        return vector[..., self.channel, :]

    def velocity(self):
        homing_vector = self.memory - self.asv()
        homing_vector = np.squeeze(homing_vector)
        velocity = pd.Series(data=0,
                             index=['dx', 'dy', 'dz',
                                    'dalpha_0', 'dalpha_1', 'dalpha_2'])
        velocity[['dx', 'dy', 'dz']] = homing_vector
        return velocity


# 1) Connect to the database
mydb_filename = pkg_resources.resource_filename(
    'navipy', 'resources/database.db')
mydb = DataBaseLoad(mydb_filename)
mybrain = ASVBrain(renderer=mydb)
# 2) Define the position-orientation at which
# we want the image
posorient = mydb.posorients.loc[12, :]
mybrain.update(posorient)
mybrain.velocity()
import matplotlib.pyplot as plt
from navipy.database import DataBaseLoad
import navipy.processing as processing
from navipy import Brain
import pkg_resources
# 1) Connect to the database
mydb_filename = pkg_resources.resource_filename(
    'navipy', 'resources/database.db')
mydb = DataBaseLoad(mydb_filename)
mybrain = Brain(renderer=mydb)
# 2) Define the position-orientation at which
# we want the image
posorient = mydb.posorients.loc[12, :]
mybrain.update(posorient)
my_contrast = processing.pcode.contrast_weighted_nearness(
    mybrain.vision.scene)
f, axarr = plt.subplots(2, 2, figsize=(15, 8))
axarr = axarr.flatten()
for chan_i, chan_n in enumerate(mydb.channels):
    ax = axarr[chan_i]
    ax.imshow(my_contrast[:, :, chan_i, 0])
    ax.set_title('channel: ' + chan_n)
    ax.invert_yaxis()
f.show()
from navipy.database import DataBaseLoad
import pkg_resources
# 1) Connect to the database
mydb_filename = pkg_resources.resource_filename(
    'navipy', 'resources/database.db')
mydb = DataBaseLoad(mydb_filename)
posorients = mydb.posorients
posorients.head(n=5)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from navipy.database import DataBaseLoad
import pkg_resources
# 1) Connect to the database
mydb_filename = pkg_resources.resource_filename(
    'navipy', 'resources/database.db')
mydb = DataBaseLoad(mydb_filename)
# 2) Define the position-orientation at which
# we want the image
posorient = pd.Series(index=['x', 'y', 'z',
                             'alpha_0', 'alpha_1', 'alpha_2'],
                      dtype=float)
posorient.x = 0.0
posorient.y = -0.25
posorient.z = 3.0
posorient.alpha_0 = np.pi / 2
posorient.alpha_1 = 0
posorient.alpha_2 = 0
# 3) Load the image
image = mydb.scene(posorient=posorient)
image = image[..., 0]
# 4) plot the image
to_plot_im = image[:, :, :3]
to_plot_im -= to_plot_im.min()
to_plot_im /= to_plot_im.max()
to_plot_im = to_plot_im * 255
to_plot_im = to_plot_im.astype(np.uint8)
to_plot_dist = image[:, :, 3]
plt.subplot(1, 2, 1)
plt.imshow(to_plot_im)
plt.gca().invert_yaxis()
plt.subplot(1, 2, 2)
plt.imshow(to_plot_dist)
plt.gca().invert_yaxis()
plt.show()
import numpy as np
import matplotlib.pyplot as plt
from navipy.database import DataBaseLoad
import pkg_resources
# 1) Connect to the database
mydb_filename = pkg_resources.resource_filename(
    'navipy', 'resources/database.db')
mydb = DataBaseLoad(mydb_filename)
# 2) Define the position-orientation at which
# we want the image
rowid = 12
# 3) Load the image
image = mydb.scene(rowid=rowid)
image = image[..., 0]
# 4) plot the image
to_plot_im = image[:, :, :3]
to_plot_im -= to_plot_im.min()
to_plot_im /= to_plot_im.max()
to_plot_im = to_plot_im * 255
to_plot_im = to_plot_im.astype(np.uint8)
to_plot_dist = image[:, :, 3]
plt.subplot(1, 2, 1)
plt.imshow(to_plot_im)
plt.gca().invert_yaxis()
plt.subplot(1, 2, 2)
plt.imshow(to_plot_dist)
plt.gca().invert_yaxis()
plt.show()
import matplotlib.pyplot as plt
from navipy.database import DataBaseLoad
import navipy.processing as processing
from navipy import Brain
import pkg_resources
# 1) Connect to the database
mydb_filename = pkg_resources.resource_filename(
    'navipy', 'resources/database.db')
mydb = DataBaseLoad(mydb_filename)
mybrain = Brain(renderer=mydb)
# 2) Define the position-orientation at which
# we want the image
posorient = mydb.posorients.loc[12, :]
mybrain.update(posorient)
my_contrast = processing.pcode.michelson_contrast(mybrain.vision.scene)
f, axarr = plt.subplots(2, 2, figsize=(15, 8))
axarr = axarr.flatten()
for chan_i, chan_n in enumerate(mydb.channels):
    ax = axarr[chan_i]
    ax.imshow(my_contrast[:, :, chan_i, 0])
    ax.set_title('channel: ' + chan_n)
    ax.invert_yaxis()
f.show()
# import matplotlib.pyplot as plt
from navipy.database import DataBaseLoad
from navipy.processing import pcode
from navipy import Brain
import pkg_resources
# 1) Connect to the database
mydb_filename = pkg_resources.resource_filename(
    'navipy', 'resources/database.db')
mydb = DataBaseLoad(mydb_filename)
mybrain = Brain(renderer=mydb)
# 2) Define the position-orientation at which
# we want the image
posorient = mydb.posorients.loc[12, :]
mybrain.update(posorient)
my_pcv = pcode.pcv(mybrain.vision.scene,
                   mybrain.vision.viewing_directions)
import numpy as np
import matplotlib.pyplot as plt
from navipy.database import DataBaseLoad
from navipy.comparing import rot_imagediff
import pkg_resources
# 1) Connect to the database
mydb_filename = pkg_resources.resource_filename(
    'navipy', 'resources/database.db')
mydb = DataBaseLoad(mydb_filename)
# 2) Define the position-orientation at which
# we want the image and get the scene
rowid = 12
my_scene_memory = mydb.scene(rowid=rowid)
rowid = 1
my_scene_current = mydb.scene(rowid=rowid)
# Calculate the rotational image difference
rotdf = rot_imagediff(my_scene_current, my_scene_memory)
rotdf = np.roll(rotdf, 180, axis=0)
f, axarr = plt.subplots(1, 2, figsize=(15, 4))
for chan_i, chan_n in enumerate(mydb.channels):
    if chan_n == 'D':
        color = 'k'
        ax = axarr[1]
    else:
        color = chan_n
        ax = axarr[0]
    ax.plot(rotdf[:, chan_i], color=color)
f.show()
import numpy as np
import matplotlib.pyplot as plt
from navipy.database import DataBaseLoad
import pkg_resources
# 1) Connect to the database
mydb_filename = pkg_resources.resource_filename(
    'navipy', 'resources/database.db')
mydb = DataBaseLoad(mydb_filename)
# 2) Define the position-orientation at which
# we want the image and get the scene
rowid = 12
my_scene = mydb.scene(rowid=rowid)
f, axarr = plt.subplots(1, 2, figsize=(15, 4))
to_plot_im = my_scene[:, :, :3, 0]
to_plot_im -= to_plot_im.min()
to_plot_im /= to_plot_im.max()
to_plot_im = to_plot_im * 255
to_plot_im = to_plot_im.astype(np.uint8)
to_plot_dist = my_scene[:, :, 3, 0]
ax = axarr[0]
ax.imshow(to_plot_im)
ax.invert_yaxis()
ax = axarr[1]
ax.imshow(to_plot_dist)
ax.invert_yaxis()
f.show()
import matplotlib.pyplot as plt
from navipy.database import DataBaseLoad
from navipy.processing import pcode as processing
from navipy import Brain
import pkg_resources
# 1) Connect to the database
mydb_filename = pkg_resources.resource_filename(
    'navipy', 'resources/database.db')
mydb = DataBaseLoad(mydb_filename)
mybrain = Brain(renderer=mydb)
# 2) Define the position-orientation at which
# we want the image
posorient = mydb.posorients.loc[12, :]
mybrain.update(posorient)
my_skyline = processing.skyline(mybrain.vision.scene)
f, axarr = plt.subplots(1, 2, figsize=(15, 4))
for chan_i, chan_n in enumerate(mydb.channels):
    if chan_n == 'D':
        color = 'k'
        ax = axarr[1]
    else:
        color = chan_n
        ax = axarr[0]
    ax.plot(my_skyline[0, :, chan_i, 0], color=color)
f.show()
import pandas as pd
from navipy.database import DataBaseLoad
from navipy import Brain
import pkg_resources
# 0) Define a class inheriting from Brain
class StaticBrain(Brain):
    def __init__(self, renderer=None):
        Brain.__init__(self, renderer=renderer)

    def velocity(self):
        velocity = pd.Series(data=0,
                             index=['dx', 'dy', 'dz',
                                    'dalpha_0', 'dalpha_1', 'dalpha_2'])
        return velocity


# 1) Connect to the database
mydb_filename = pkg_resources.resource_filename(
    'navipy', 'resources/database.db')
mydb = DataBaseLoad(mydb_filename)
mybrain = StaticBrain(renderer=mydb)
# 2) Define the position-orientation at which
# we want the image
posorient = mydb.posorients.loc[12, :]
mybrain.update(posorient)
mybrain.velocity()
Overview
========
.. toctree::
   :maxdepth: 2

   brain
   rendering
   processing
   comparing
   moving
   database
Moving
######
Overview
********
.. automodule:: navipy.moving
Summary
*******
.. automodule:: navipy.moving.agent
Inheritance
***********
.. inheritance-diagram:: navipy.moving.agent
Processing a scene
==================
An agent comes equipped with a battery of sensors, such as a camera,
a depth-estimation sensor, a compass, and an odometer. Here, we focus on
the processing of the retino-topic information provided by a camera and a
depth-estimation sensor. This retino-topic information is referred to as a scene.
image based scene (IBS)
    A classical image. Each pixel is viewed in a direction
    (elevation, azimuth) in a regular manner.
    In that case the scene is a 4d numpy array
    [elevation-index, azimuth-index, channel-index, 1].

ommatidium based scene (OBS)
    In an ommatidia based scene, the viewing directions
    do not need to be regularly spaced.
    In that case the scene is a 3d numpy array
    [ommatidia-index, channel-index, 1].
Place code
----------
Processing a scene yields a certain encoding of the information at the
location where the scene was acquired, rendered, or seen by the agent.
By extension, a place-code is either image based or ommatidium based.
The number of dimensions of an ib-place-code is always 4, and of an
ob-place-code always 3.
image based place-code (IBPC)
    A place code derived from an IBS. Each pixel is viewed in a direction
    (elevation, azimuth) in a regular manner.
    In that case the place-code is a 4d numpy array
    [elevation-index, azimuth-index, channel-index, component-index].

ommatidium based place-code (OBPC)
    A place code derived from an OBS; the viewing directions
    do not need to be regularly spaced.
    In that case the place-code is a 3d numpy array
    [ommatidia-index, channel-index, component-index].
Abusing the terminology of a place-code, a scene can itself be seen as a place-code.
Therefore an IBS and an OBS have 4 and 3 dimensions, respectively.
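The dimensionality rules above can be checked directly on numpy arrays; a toy sketch with made-up sizes:

```python
import numpy as np

# Image-based place-code: [elevation, azimuth, channel, component]
ibpc = np.zeros((90, 180, 4, 3))
# Ommatidium-based place-code: [ommatidia, channel, component]
obpc = np.zeros((5000, 4, 3))
# A scene is a place-code with a single component
ibs = np.zeros((90, 180, 4, 1))
obs = np.zeros((5000, 4, 1))

print(ibpc.ndim, obpc.ndim)  # 4 3
print(ibs.ndim, obs.ndim)    # 4 3
```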
Skyline
~~~~~~~
.. literalinclude:: examples/skyline.py
   :lines: 16

.. plot:: overview/examples/skyline.py
Michelson-contrast
~~~~~~~~~~~~~~~~~~
.. literalinclude:: examples/michelson_contrast.py
   :lines: 16

.. plot:: overview/examples/michelson_contrast.py
Contrast weighted nearness
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: examples/contrast_weighted_nearness.py
   :lines: 17-18

.. plot:: overview/examples/contrast_weighted_nearness.py
Place code vectors
~~~~~~~~~~~~~~~~~~
.. literalinclude:: examples/pcv.py
   :lines: 16-17

.. plot:: overview/examples/pcv.py
Average place code vectors
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: examples/apcv.py
   :lines: 16-17

.. plot:: overview/examples/apcv.py