pymses.analysis.visualization — Visualization module

class Camera(center=None, line_of_sight_axis='z', up_vector=None, region_size=[1.0, 1.0], distance=0.5, far_cut_depth=0.5, map_max_size=1024, log_sensitive=True, perspectiveAngle=0)

Camera class for computing 2D projected maps. Take a look at the documentation figures to get a clearer definition.

Parameters :

center : region of interest center coordinates (default value is [0.5, 0.5, 0.5], the simulation domain center).

line_of_sight_axis : axis of the line of sight (z axis is the default value): a [ux, uy, uz] array or a simulation domain specific axis key “x”, “y” or “z”

up_vector : direction of the y axis of the camera (up). If None, the up vector is set to the z axis (or the y axis if the line-of-sight is set to the z axis). If given, a non-zero-normed [ux, uy, uz] array is expected (or a simulation domain specific axis key “x”, “y” or “z”).

region_size : projected size of the region of interest (default (1.0, 1.0))

distance : distance of the camera from the center of interest (along the line-of-sight axis, default 0.5).

far_cut_depth : distance of the background (far) cut plane from the center of interest (default 0.5). The region of interest lies between the camera position and the far cut plane.

map_max_size : maximal resolution of the camera (default 1024 pixels)

log_sensitive : whether the camera pixels are log sensitive or not (default True).

perspectiveAngle : angle value in degrees (default 0 = isometric view) which can be used to transform the standard PyMSES isometric view into a perspective view. Take a look at the documentation figures to get a clearer definition.

Examples

>>> cam  = Camera(center=[0.5, 0.5, 0.5], line_of_sight_axis='z', region_size=[1., 1.], \
... distance=0.5, far_cut_depth=0.5, up_vector='y', map_max_size=512, log_sensitive=True)
contains_camera(cam)
Parameters :

cam : another camera object

Returns :

Boolean : True if the data needed for this camera view includes all the data needed for the camera view given as argument.

copy()

Returns a copy of this camera.

deproject_points(uvw_points, origins=None)

Returns xyz_coords, the deprojected coordinates of a set of points from the given [u, v, w] coordinates:

  • (u=0, v=0, w=0) is the center of the camera.
  • v is the coordinate along the v axis.
  • w is the depth coordinate of the points along the line-of-sight of the camera.

If origins is given, a vectorial transformation of the vectors described by uvw_points anchored at the positions ‘origins’ is performed.

classmethod from_HDF5(h5f)

Returns a camera from an HDF5 file.

classmethod from_csv(csv_file)

Returns a camera from a csv (Comma Separated Values) file.
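Example (a minimal save/load sketch; the file name is illustrative and it is assumed here that csv_file may be given as a file path, see save_csv below):

>>> cam = Camera(center=[0.5, 0.5, 0.5], line_of_sight_axis='x')
>>> cam.save_csv("my_camera.csv")
>>> cam2 = Camera.from_csv("my_camera.csv")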

get_3D_right_eye_cam(z_fixed_point=0.0, ang_deg=1.0)

Get the 3D right eye camera for stereoscopic view, which is made from the original camera with just one rotation around the up vector (angle ang_deg)

Parameters :

ang_deg : float

angle between self and the returned camera (in degrees, default 1.0)

z_fixed_point : float

position (along w axis) of the fixed point in the right eye rotation

Returns :

right_eye_cam : the right eye Camera object for 3D image processing

get_bounding_box()

Returns the bounding box of the region of interest in the simulation domain corresponding to the area covered by the camera.

get_camera_axis()

Returns the camera u, v and z axis coordinates

get_map_box(reduce_u_v_to_PerspectiveRatio=False)

Returns the (0., 0., 0.)-centered cubic bounding box of the area covered by the camera.

Parameters :

reduce_u_v_to_PerspectiveRatio : boolean (default False)

take into account the camera.perspectiveAngle if it is defined to make a perspective projection. This reduces the map u and v (i.e. horizontal and vertical) sizes with the perspective ratio.

get_map_mask(float32=True)

Returns the mask map of the camera. Each pixel has an alpha value:

  • 1, if the ray of the pixel intersects the simulation domain
  • 0, if not

Parameters :

float32 : boolean (default True)

use “float32” numpy dtype arrays instead of float64 to save memory (when a float type is not needed, the int8 type will be used anyway)
get_map_size()
Returns :

(nx, ny) : (int, int) tuple

the size (nx,ny) of the image taken by the camera (pixels)

get_pixel_surface()

Returns the surface area of a camera pixel.

get_pixels_coordinates_edges(take_into_account_perspective=False)

Returns the edge values of the camera pixels’ x/y coordinates. The pixel coordinates of the center of the camera are (0, 0).

get_rays()

Returns the ray_vectors, ray_origins and ray_lengths arrays defining the rays for ray tracing.

get_region_size_level()

Returns the level of the AMR grid for which the cell size ~ the region size

get_required_resolution()
Returns :

lev : int

the level of refinement up to which one needs to read the data to compute the projection of the region of interest with the specified resolution.

get_slice_points(z=0.0)

Returns the (x, y, z) coordinates of the points contained in a slice plane perpendicular to the line-of-sight axis at a given position z.

z — slice plane position along line-of-sight (default 0.0 => center of the region)

printout()

Print camera parameters in the console

project_points(points, take_into_account_perspective=False)

Return a (coords_uv, depth) tuple where ‘coords_uv’ contains the projected coordinates of a set of points on the camera plane ((u=0, v=0) is the center of the camera plane) and ‘depth’ contains the depth coordinates of the points along the line-of-sight of the camera.

Parameters :

points : numpy array of floats

array of point (x, y, z) coordinates to project

take_into_account_perspective : boolean (default False)

take into account the camera.perspectiveAngle if it is defined to make a perspective projection.
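Example (a minimal sketch; the point coordinates are illustrative):

>>> import numpy
>>> cam = Camera(center=[0.5, 0.5, 0.5], line_of_sight_axis='z')
>>> points = numpy.array([[0.5, 0.5, 0.5], [0.6, 0.5, 0.4]])
>>> coords_uv, depth = cam.project_points(points)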
rotate_around_up_vector(ang_deg=1.0)
save_HDF5(h5f)

Saves the camera parameters into a HDF5 file.

save_csv(csv_file)

Saves the camera parameters into a csv (Comma Separated Values) file.

set_perspectiveAngle(perspectiveAngle=0)

Set the perspectiveAngle (default 0 = isometric view): an angle value in degrees which can be used to transform the standard PyMSES isometric view into a perspective view.

similar(cam)

Roughly test whether a camera is approximately equal to another one, just to know, in the amrviewer GUI, whether data needs to be reloaded or not.

viewing_angle_rotation()

Returns the rotation corresponding to the viewing angle of the camera

viewing_angle_transformation()

Returns the transformation corresponding to the viewing angle of the camera

save_map_HDF5(map, camera, unit=None, scale_unit=None, hdf5_path='./', map_name='my_map', float32=True, save_map_mask=True)

Saves the map and the camera into an HDF5 file.

Parameters :

map : numpy array

map to save

camera : Camera

camera associated with the map to save

unit : pymses Unit (default None)

map unit

scale_unit : float (default None)

scale unit

hdf5_path : string (default “./”)

path of the HDF5 file to create

map_name : string (default “my_map”)

name of the map

float32 : boolean (default True)

use “float32” numpy dtype arrays instead of float64 to save memory

save_map_mask : boolean (default True)

save the camera map_mask or not (it is used to set transparency where there is no intersection with the simulation domain)
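Example (a minimal sketch; the map array and map name are illustrative):

>>> import numpy
>>> cam = Camera(center=[0.5, 0.5, 0.5], map_max_size=256)
>>> my_map = numpy.random.rand(256, 256)
>>> save_map_HDF5(my_map, cam, map_name="rho_map")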
save_HDF5_to_plot(h5fname, img_path=None, axis_unit=None, map_unit=None, cmap='jet', cmap_range=None, fraction=None, save_into_png=True, discrete=False, verbose=True)

Function that plots the map with axis + colorbar from an HDF5 file

Parameters :

h5fname : the name of the HDF5 file containing the map

img_path : the path in which the plot image file is to be saved

axis_unit : a (length_unit_label, axis_scale_factor) tuple containing :

  • the label of the u/v axes unit
  • the scaling factor of the u/v axes unit, or a Unit instance

map_unit : a (map_unit_label, map_scale_factor) tuple containing :

  • the label of the map unit
  • the scaling factor of the map unit, or a Unit instance

cmap : a Colormap object or any default python colormap string

cmap_range : a [vmin, vmax] array for map values clipping (linear scale)

fraction : fraction of the total map values below the min. map range (in percent)

save_into_png : whether the plot is saved into a PNG file or not (default True)

discrete : whether the map values are discrete integer values (default False); used for the colormap
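Example (a minimal sketch; the file name and unit tuples are illustrative):

>>> save_HDF5_to_plot("rho_map.h5", img_path="./", axis_unit=("Mpc", 1.0), map_unit=("H/cc", 1.0), cmap="jet")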

save_HDF5_to_img(h5fname, img_path=None, cmap='jet', cmap_range=None, fraction=None, discrete=False, ramses_output=None, ran=None, adaptive_gaussian_blur=False, drawStarsParam=None, verbose=True, log_sensitive=None, alpha_map_mask=True)

Function that plots, from an HDF5 file, the map into a Image and saves it into a PNG file

Parameters :

h5fname : string

the name of the HDF5 file containing the map

img_path : string (default None)

the path in which the image file is to be saved. The image is returned (and not saved) if left to None (default value)

cmap : string or Colormap object (default “jet”)

colormap to use

cmap_range : [vmin, vmax] array (default None)

value range for map values clipping (linear scale)

fraction : float (default None)

fraction of the total map values below the min. map range (in percent)

discrete : boolean (default False)

whether the colormap must be integer values only or not.

ramses_output : Ramses_output

specify the RAMSES output for an additional CSV star file (looks for a “sink_%iout.csv” file with 3D coordinates in the output directory) to add stars to the image

ran : boolean or (float, float) (default None)

specify the map range values to fix the colormap during a movie sequence (same as the “Colormap range” values printed in the console when verbose=True)

adaptive_gaussian_blur : boolean (default False)

experimental : compute the local image resolution and apply an adaptive Gaussian blur to the image where it is needed (useful to avoid big AMR pixels with the ray-tracing technique). For a rotated view : give the levelmax map in this parameter to get the proper local image resolution

drawStarsParam : DrawStarsParameters (default None)

if ramses_output is specified and if a star file is found, this may be used to specify some parameters

verbose : boolean (default True)

if True, print colormap range in console.

log_sensitive : boolean (default None)

apply a logarithmic scaling to the map values; if not specified, the HDF5 camera.log_sensitive value is used to decide

alpha_map_mask : boolean (default True)

use the camera map_mask for the alpha band : rays that don’t intersect the simulation domain are see-through (alpha = 0)

Returns :


img : PIL Image

if img_path is left to None

ran = (vmin, vmax) :

if img_path is specified : returns the colormap range, which can be used as a ran parameter for future images
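Example (a minimal sketch; the file name is illustrative):

>>> ran = save_HDF5_to_img("rho_map.h5", img_path="./", cmap="jet", verbose=False)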

save_HDF5_seq_to_img(h5f_iter, *args, **kwargs)

fraction : fraction (percent) of the total value of the map above the returned vmin value (default 1 %)

get_map_range(map, log_sensitive=False, cmap_range=None, fraction=None)

Map range computation function. Computes the linear/log (according to the map values scaling) scale map range values of a given map :

  • if a user-defined cmap_range is given, then it is used to compute the map range values
  • if not, the map range values are computed from a fraction (in percent) of the total value of the map parameter. The min. map range value is defined as the value below which lies that fraction of the map (default 1 %)
Parameters :

map : 2D map from which the map range values are computed

log_sensitive : whether the map values are log-scaled or not (True or False)

cmap_range : user-defined map range values (linear scale)

fraction : fraction of the total map values below the min. map range (in percent)

Returns :

map_range : [float, float]

the map range values [min, max]
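Example (a minimal sketch using a random map):

>>> import numpy
>>> my_map = numpy.random.rand(512, 512)
>>> vmin, vmax = get_map_range(my_map, log_sensitive=False, fraction=1.0)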

apply_log_scale(map, verbose=True)

Used to apply a log scale to the map if the camera captors are log-sensitive. Takes care of null and negative values (a warning is issued).

Parameters :

map : original numpy array of map values

Returns :

map : ~ numpy.log10(map) (takes care of null and negative values)
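Example (a minimal sketch; the map array is illustrative):

>>> import numpy
>>> my_map = numpy.random.rand(256, 256)
>>> log_map = apply_log_scale(my_map, verbose=False)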
class DrawStarsParameters(adapt_intensity=True, rgb=[255, 255, 255], PSF=True, RT_instensity_dimming=False)

Utility class to store parameters for the draw_stars function

Parameters :

adapt_intensity : boolean

Whether or not to adapt the intensity with the 4th column of the sink CSV file

rgb : [R, G, B] list of int

list of 3 integers between 0 and 255 corresponding to a RGB color

PSF : boolean or Colormap object

colormap to use

RT_instensity_dimming : boolean

experimental : this option adds a ray-tracing pass on the data to compute the star intensity dimming

class Operator(scalar_func_dict, is_max_alos=False, use_cell_dx=False)

Base Operator generic class

class ScalarOperator(scalar_func)

ScalarOperator class

Parameters :

scalar_func : function

function of a single dset argument, returning the scalar data array from this dset Dataset.

Examples

>>> # Density field scalar operator
>>> op = ScalarOperator(lambda dset: dset["rho"])
class FractionOperator(num_func, denom_func)

FractionOperator class

Parameters :

num_func : function

numerator function like scalar_func (see ScalarOperator)

denom_func : function

denominator function like scalar_func (see ScalarOperator)

Examples

>>> # Mass-weighted density scalar operator
>>> num = lambda dset: dset["rho"]**2 * dset.get_sizes()**3
>>> den = lambda dset: dset["rho"]    * dset.get_sizes()**3
>>> op = FractionOperator(num, den)

I = \frac{\int\limits_{V} \rho \times \rho \mathrm{d}V}{\int\limits_{V} \rho \mathrm{d}V}

class MaxLevelOperator

Max. AMR level of refinement operator class

SliceMap(source, camera, op, z=0.0, interpolation=False, use_C_code=True, use_openCL=False, verbose=False)

Compute a map made of sampling points

Parameters :

source : Source

data source

camera : Camera

camera handling the view parameters

op : Operator

data sampling operator

z : float

position of the slice plane along the line-of-sight axis of the camera

interpolation : boolean (default False)

Experimental : a proper bi/tri-linear interpolation would be welcome! THIS IS NOT IMPLEMENTED YET : this attempt assumed corner cell data while RAMSES uses cell-centered data, leaving aside the problem of different AMR levels...

use_C_code : boolean (default True)

The pure C code is slightly faster than the (not well optimized) Cython code, and should give the same result

use_openCL : boolean (default False)

Experimental : use “pyopencl” http://pypi.python.org/pypi/pyopencl

verbose : boolean (default False)

some console printout...

Returns :


map : array

sliced map
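Example (a minimal sketch, assuming a RamsesOutput object “ro” whose amr_source() provides the “rho” field):

>>> amr = ro.amr_source(["rho"])
>>> cam = Camera(center=[0.5, 0.5, 0.5], line_of_sight_axis='z', map_max_size=512)
>>> op = ScalarOperator(lambda dset: dset["rho"])
>>> rho_map = SliceMap(amr, cam, op, z=0.0)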

pymses.analysis.visualization.fft_projection — FFT-convolved map module

class MapFFTProcessor(source, info, ker_conv=None, pre_flatten=False, remember_data=False, cache_dset={}, use_camera_lvlmax=True)

MapFFTProcessor class

Parameters :

source : Source

data source

info : dict

RamsesOutput info dict.

ker_conv : ConvolKernel (default None, which leads to using a GaussSplatterKernel)

convolution kernel used for the map processing

pre_flatten : boolean (default False)

option to flatten the data source (using multiprocessing if possible) before computing the map. The filtered data are then saved into the “self.filtered_source” source attribute.

remember_data : boolean (default False)

option which uses a “self.cache_dset” dictionary attribute as a cache to avoid reloading dsets from disk. This uses a lot of memory as it currently remembers an active_mask by levelmax filtering for each (dataset, levelmax) couple

cache_dset : python dictionary (default {})

cache dsets dictionary reference, used only if remember_data == True, to share the same cache between various MapFFTProcessor instances. It is a dictionary of PointDatasets created with the CellsToPoints filter, referenced by [icpu, lmax] where icpu is the cpu number and lmax is the max AMR level used.

use_camera_lvlmax : boolean (default True)

limit the transformation of the AMR grid into particles to the AMR cells below the camera octree levelmax (so that the visible cells are only the ones bigger than the camera pixel size). Set this to False when directly using particle data from ”.part” particle files (dark matter and star particles), so as to get the cache_dset working without the levelmax specification
prepare_data(camera, field_list=None)

Prepare data method : computes the “self.filtered_source” source attribute for the process(...) method. Loads data from disk, or from cache if the remember_data option is activated. The data are then filtered with the CameraFilter class. This uses multiprocessing if possible.

Parameters :

camera : Camera

camera containing all the view parameters; the filtering is done according to those parameters

field_list : list of strings

list of AMR data fields that need to be read
process(op, camera, surf_qty=False, multiprocessing=True, FFTkernelSizeFactor=1, data_already_prepared=False, random_shift=False, stars_age_instensity_dimming=False)

Map processing method

Parameters :

op : Operator

physical scalar quantity data operator

camera : Camera

camera containing all the view params

surf_qty : boolean (default False)

whether the processed map is a surface physical quantity. If True, the map is divided by the surface of a camera pixel.

multiprocessing : boolean (default True)

try to use multiprocessing to compute both of the FractionOperator’s “top” and “down” FFT maps in parallel

FFTkernelSizeFactor : int or float (default 1)

allows changing the convolution kernel size by a multiplying factor to adjust point sizes

data_already_prepared : boolean (default False)

set this option to True if you have already called the prepare_data() method : this method will then simply use its “self.filtered_source” source attribute without computing it again

random_shift : boolean (default False)

add a random shift to the point positions to avoid seeing the grid on the resulting image

stars_age_instensity_dimming : boolean (default False)

Requires the “epoch” field. Makes use of this formula :

if star_age < 10 million years (Myr) : intensity_weights = operator_weights (young stars are normally bright)
else : intensity_weights = operator_weights * [star_age/10**6 Myr]**-0.7 (intensity dimming with age)

Returns :

map : array

FFT-convolved processed map
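Example (a minimal sketch, assuming a RamsesOutput object “ro” providing the “rho” AMR field):

>>> cam = Camera(center=[0.5, 0.5, 0.5], line_of_sight_axis='z', map_max_size=512)
>>> op = ScalarOperator(lambda dset: dset["rho"])
>>> mp = MapFFTProcessor(ro.amr_source(["rho"]), ro.info)
>>> map = mp.process(op, cam, surf_qty=True)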

class ConvolKernel(ker_func, size_func=None, max_size=None)

Convolution kernel class

convol_fft(map_dict, cam_dict)

FFT convolution method designed to convolve a dict. of maps into a single map.

map_dict : map dict. where the dict. keys are the sizes of the convolution kernel.
cam_dict : ExtendedCamera dict. corresponding to the different maps of the map dict.

get_size(dset)
class GaussSplatterKernel(size_func=None, max_size=None)

2D Gaussian splatter convolution kernel

class Gauss1DSplatterKernel(axis, size_func=None, max_size=None)

1D Gaussian splatter convolution kernel (along a given axis)

class PyramidSplatterKernel(size_func=None, max_size=None)

2D pyramidal splatter convolution kernel

class Cos2SplatterKernel(size_func=None, max_size=None)

2D Squared cosine splatter convolution kernel

pymses.analysis.visualization.raytracing — Ray-tracing module

class RayTracer(ramses_output, field_list)

RayTracer class

Parameters :

ramses_output : RamsesOutput

ramses output from which data will be read to compute the map

field_list : list of string

list of all the required AMR fields to read (see amr_source())

process(op, camera, surf_qty=False, verbose=True, multiprocessing=True, source=None, use_hilbert_domain_decomp=True, use_C_code=True, use_bottom_up=False)

Map processing method : ray-tracing through data cube

Parameters :

op : Operator

physical scalar quantity data operator

camera : Camera

camera containing all the view params

surf_qty : boolean (default False)

whether the processed map is a surface physical quantity. If True, the map is divided by the surface of a camera pixel.

verbose : boolean (default True)

show more console printouts

multiprocessing : boolean (default True)

try to use multiprocessing (process the cpu data files in parallel) to speed up the code (needs more RAM; python 2.6 or higher required)

source : pymses.sources... (default None)

Optional : The source to process may be specified here if you want to reuse a CameraOctreeDatasource already loaded in memory for example (see pymses/bin/pymses_tf_ray_tracing.py)

use_hilbert_domain_decomp : boolean (default True)

If False, iterate over the whole octree for each cpu file (instead of iterating over the cpu minimal domain decomposition, which is faster)

use_C_code : boolean (default True)

Our pure C code is faster than the (not well optimized) Cython code, and should give the same result

use_bottom_up : boolean (default False)

Force the use of the bottom-up algorithm instead of the classic top-down traversal of the octree. Uses the “neighbors” array. DOESN’T WORK YET
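Example (a minimal sketch, assuming a RamsesOutput object “ro” providing the “rho” AMR field):

>>> rt = RayTracer(ro, ["rho"])
>>> cam = Camera(center=[0.5, 0.5, 0.5], line_of_sight_axis='z', map_max_size=512)
>>> op = ScalarOperator(lambda dset: dset["rho"])
>>> rho_map = rt.process(op, cam, surf_qty=False)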

class OctreeRayTracer(*args)

RayTracerDir class

Parameters :

ramses_output : RamsesOutput

ramses output from which data will be read to compute the map

field_list : list of string

list of all the required AMR fields to read (see amr_source())

process(op, camera, surf_qty=False, return_image=True, rgb=True, use_C_code=True, use_openCL=False, dataset_already_loaded=False, reload_scalar_field=False)

Map processing method : directional ray-tracing through AMR tree

Parameters :

op : Operator

physical scalar quantity data operator

camera : Camera

camera containing all the view params

surf_qty : boolean (default False)

whether the processed map is a surface physical quantity. If True, the map is divided by the surface of a camera pixel.

return_image : boolean (default True)

if True, return a PIL image (when the rgb option is also True), else return a numpy array map

rgb : boolean (default True)

if True, the code uses the camera.color_tf to compute an RGB image. If False, the code doesn’t use the camera.color_tf and works like the standard RayTracer; it then returns two maps : the requested map and the AMR levelmax map

use_C_code : boolean (default True)

Our pure C code is faster than the (not well optimized) Cython code, and should give the same result

use_openCL : boolean (default False)

Experimental : use “pyopencl” http://pypi.python.org/pypi/pyopencl

dataset_already_loaded : boolean (default False)

Flag used with use_openCL=True to avoid reloading a dataset on the device

reload_scalar_field : boolean (default False)

Flag used with use_openCL=True and dataset_already_loaded=True to avoid reloading the dataset structure on the device while using a different scalar field
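Example (a minimal sketch, assuming a RamsesOutput object “ro” providing the “rho” AMR field; with rgb=False two maps are returned, as described above):

>>> ort = OctreeRayTracer(ro, ["rho"])
>>> cam = Camera(center=[0.5, 0.5, 0.5], line_of_sight_axis='z', map_max_size=512)
>>> op = ScalarOperator(lambda dset: dset["rho"])
>>> rho_map, levelmax_map = ort.process(op, cam, rgb=False)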
class RayTracerMPI(ramses_output, field_list, remember_data=False)

RayTracerMPI class

Parameters :

ramses_output : RamsesOutput

ramses output from which data will be read to compute the map

field_list : list of string

list of all the required AMR fields to read (see amr_source())

remember_data : boolean (default False)

option to remember the datasets loaded. Avoids reading the data again for each frame of a rotation movie. WARNING : the saved cache data doesn’t update its levelmax and cpu list yet, so use this carefully if zooming/moving too much inside the simulation box.

process(op, camera, surf_qty=False, use_balanced_cpu_list=False, testing_ray_number_max=100, verbose=False, use_C_code=True)

Map processing method using MPI: ray-tracing through data cube

Parameters :

op : Operator

physical scalar quantity data operator

camera : Camera

camera containing all the view params

surf_qty : boolean (default False)

whether the processed map is a surface physical quantity. If True, the map is divided by the surface of a camera pixel.

use_balanced_cpu_list : boolean (default False)

option to optimize the load balancing between MPI processes, by adding an initial dset testing pass before processing all the rays

testing_ray_number_max : int (default 100)

number of testing rays for the balanced cpu list option

verbose : boolean (default False)

more printouts (may flood the console for big simulations with many cpus)

use_C_code : boolean (default True)

Our pure C code is faster than the (not well optimized) Cython code, and should give the same result
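Example (a minimal sketch, assuming an MPI-launched script and a RamsesOutput object “ro” providing the “rho” AMR field):

>>> rt = RayTracerMPI(ro, ["rho"])
>>> cam = Camera(center=[0.5, 0.5, 0.5], line_of_sight_axis='z', map_max_size=512)
>>> op = ScalarOperator(lambda dset: dset["rho"])
>>> rho_map = rt.process(op, cam, use_balanced_cpu_list=True)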