Welcome to the FELTOR project!

[Image: 3D simulation]

FELTOR (Full-F ELectromagnetic code in TORoidal geometry) is both a numerical library and a scientific software package built on top of it.

Its main physical targets are plasma edge and scrape-off layer (gyro-)fluid simulations. The numerical methods centre around discontinuous Galerkin methods on structured grids. Our core level functions are parallelized for a variety of hardware, from multi-core CPUs to hybrid MPI+GPU, which makes the library incredibly fast.


1. Quick start guide

Go ahead and clone our library into any folder you like

$ git clone https://github.com/feltor-dev/feltor

You also need to clone thrust and cusp, both distributed under the Apache-2.0 license. Again, in a folder of your choice

$ git clone https://github.com/thrust/thrust
$ git clone https://github.com/cusplibrary/cusplibrary

Our code only depends on external libraries that are themselves openly available. We note here that we do not distribute copies of these libraries.

Now you need to tell the feltor configuration where these external libraries are located on your computer. The default way to do this is to go to your HOME directory, create an include directory and link the library paths inside it:

$ cd ~
$ mkdir include
$ cd include
$ ln -s path/to/thrust/thrust thrust
$ ln -s path/to/cusplibrary/cusp cusp

If you do not like this, you can also create your own config file as described here.
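Such a config file essentially overrides a few Makefile variables. The following is only a sketch: the variable names mirror the defaults in the feltor config folder, but the exact file name, location and available variables are described in the linked documentation and should be checked there:

$ cat my-config.mk                                   # hypothetical file name
INCLUDE = -I/path/to/thrust -I/path/to/cusplibrary   # direct include paths instead of links in ~/include
CC      = g++                                        # host C++ compiler
MPICC   = mpic++                                     # MPI compiler wrapper
OPT     = -O3                                        # optimization flags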

Now let us compile the first benchmark program.

$ cd path/to/feltor/inc/dg

$ make blas_b device=omp #(for an OpenMP version)
#or
$ make blas_b device=gpu #(if you have a gpu and nvcc )

The minimum requirement to compile and run an application is a working C++ compiler (g++ by default) and a CPU. To simplify the compilation process we use the GNU Make utility, a standard build automation tool. We do not use C++11 features, to avoid complications since some clusters are a bit behind on up-to-date compilers. The OpenMP standard is natively supported by most recent C++ compilers.
Our GPU backend uses the Nvidia CUDA programming environment; in order to compile and run a program for a GPU a user needs the nvcc CUDA compiler (available free of charge) and an NVidia GPU. However, we explicitly note that due to the modular design of our software a user needs neither a GPU nor the nvcc compiler. The CPU version of the backend is equally valid and provides the same functionality.
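If you are unsure whether these prerequisites are in place, a quick check of the toolchain could look like this (all three are standard commands; nvcc is only needed for the GPU backend):

$ g++ --version    # default host compiler; recent versions support OpenMP via -fopenmp
$ make --version   # GNU Make drives the build
$ nvcc --version   # only required for device=gpu builds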

Run the code with

$ ./blas_b 

and when prompted for input vector sizes, type for example 3 100 100 10, which makes a grid with 3 polynomial coefficients, 100 cells in x, 100 cells in y and 10 in z. If you compiled for OpenMP, you can set the number of threads with e.g. export OMP_NUM_THREADS=4.
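Putting this together, a non-interactive run could look like the following sketch, assuming (as the MPI examples further down suggest) that the program reads its parameters from standard input:

$ export OMP_NUM_THREADS=4       # only relevant for the device=omp build
$ echo 3 100 100 10 | ./blas_b   # 3 polynomial coefficients, 100 x 100 x 10 cells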

This benchmark program times various elemental functions the library is built on. Go ahead and vary the input parameters and see how your hardware performs. You can compile and run any other program that ends in _t.cu (test programs) or _b.cu (benchmark programs) in feltor/inc/dg in this way.

Now, let us test the MPI setup.

You can of course skip this if you do not have MPI installed on your computer. If you intend to use the MPI backend, an implementation of the MPI standard is required. By default mpic++ is used for compilation.

 $ cd path/to/feltor/inc/dg
 
 $ make blas_mpib device=omp  # (for MPI+OpenMP)
 # or
 $ make blas_mpib device=gpu # (for MPI+GPU)

Run the code with $ mpirun -n '# of procs' ./blas_mpib. When prompted, first type the number of processes you want to use in the x-, y- and z-direction, for example 2 2 1 (i.e. 2 processes in x, 2 in y and 1 in z; 4 processes in total). When prompted for input vector sizes, type for example 3 100 100 10 (the number of cells divided by the number of processes must be an integer in each direction). If you compiled for MPI+OpenMP, you can set the number of OpenMP threads with e.g. export OMP_NUM_THREADS=2.
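A complete non-interactive run of the MPI benchmark could then look like this sketch (the decomposition and grid sizes are example values; the inputs are piped via echo in the same way as for the toefl_mpi example further down):

$ export OMP_NUM_THREADS=2                            # threads per MPI process (MPI+OpenMP build)
$ echo 2 2 1 3 100 100 10 | mpirun -n 4 ./blas_mpib   # 2 x 2 x 1 processes, then the grid parameters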

Now, we want to compile a simulation program. First, we have to download and install some libraries for I/O operations.

For data output we use the NetCDF library under an MIT-like license. The underlying HDF5 library also uses a very permissive license. Note that for the MPI versions of applications you need to build hdf5 and netcdf with the --enable-parallel flag. Do NOT use the pnetcdf library, which uses the classic netcdf file format.
Our JSON input files are parsed by JsonCpp, distributed under the MIT license (use the 0.y.x branch to avoid a C++11 requirement).
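As a rough sketch of what such a parallel build can look like, assuming a standard autotools build with mpicc into $HOME/local (options differ between library versions and systems, and depending on the netcdf version parallel support is either enabled with --enable-parallel or picked up automatically from a parallel HDF5; consult the install notes of both libraries):

$ # HDF5, built with parallel I/O enabled
$ CC=mpicc ./configure --enable-parallel --prefix=$HOME/local
$ make && make install
$ # NetCDF-C, built against the parallel HDF5 from above
$ CC=mpicc CPPFLAGS=-I$HOME/local/include LDFLAGS=-L$HOME/local/lib ./configure --prefix=$HOME/local
$ make && make install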

Some desktop applications in FELTOR use the draw library (also developed by us under the MIT license), which depends on OpenGL (see the installation guide) and glfw, an OpenGL development library under a BSD-like license. You do not need these when you are on a cluster.

As in the include-directory step above, you need to create links to the JsonCpp library include path (and optionally the draw library) in your include folder, or provide the paths in your config file.
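A minimal sketch of this linking step (the link names json and draw are assumptions based on how the headers are included; adjust the source paths to wherever you cloned the libraries):

$ cd ~/include
$ ln -s path/to/jsoncpp/include/json json   # JsonCpp headers
$ ln -s path/to/draw draw                   # optional: the draw library

We are ready to compile now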

 $ cd path/to/feltor/src/toefl # or any other project in the src folder
 
 $ make toeflR device=gpu     # (compile on gpu or omp)
 $ ./toeflR <inputfile.json>  # (behold a live simulation with glfw output on screen)
 # or
 $ make toefl_hpc device=gpu  # (compile on gpu or omp)
 $ ./toefl_hpc <inputfile.json> <outputfile.nc> # (a single node simulation with output stored in a file)
 # or
 $ make toefl_mpi device=omp  # (compile on gpu or omp)
 $ export OMP_NUM_THREADS=2   # (2 OpenMP threads per MPI process; set to 1 for pure MPI)
 $ echo 2 2 | mpirun -n 4 ./toefl_mpi <inputfile.json> <outputfile.nc>
 $ # (a multi node simulation with now in total 8 threads with output stored in a file)
 $ # The mpi program will wait for you to type the number of processes in x and y direction before
 $ # running. That is why the echo is there. 

A default input file is located in path/to/feltor/src/toefl/input. All three programs solve the same equations. The technical documentation on what equations are discretized, input/output parameters, etc. can be generated as a pdf with make doc in the path/to/feltor/src/toefl directory.
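For example (assuming a LaTeX toolchain is installed for the pdf output):

$ cd path/to/feltor/src/toefl
$ make doc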

2. Further reading

Please check out our wiki pages for some general information and user-oriented documentation. Moreover, we maintain tex files in every src folder for technical documentation, which can be compiled with make doc in the respective src folder. The developer-oriented documentation of the dG library is generated with Doxygen from the source code.

3. Contributions and Acknowledgements

For instructions on how to contribute read the wiki page. We gratefully acknowledge contributions from

  • Ralph Kube
  • Eduard Reiter
  • Lukas Einkemmer

We further acknowledge support on the Knights Landing architecture from the High Level Support Team:

  • Albert Gutiérrez
  • Xavier Saez

and from Intel Barcelona

  • Harald Servat

4. License

FELTOR is free software and licensed under the very permissive MIT license. It was originally developed by Matthias Wiesenberger and Markus Held.

Official releases

Our latest code release has a shiny DOI badge from Zenodo, which makes us officially citable.
