[[Image:XdmfLogo1.gif]]


<span style='color:blue'><big>e<span style='color:red'>'''X'''</span>tensible <span style='color:red'>'''D'''</span>ata <span style='color:red'>'''M'''</span>odel and <span style='color:red'>'''F'''</span>ormat</big></span>




The need for a standardized method to exchange scientific data between High Performance Computing codes and tools led to the development of the eXtensible Data Model and Format (XDMF). Uses for XDMF range from a standard format used by HPC codes to take advantage of widely used visualization programs like ParaView and EnSight, to a mechanism for performing coupled calculations using multiple, previously stand-alone codes.


XDMF categorizes data by two main attributes: size and function. Data can be Light (typically less than about a thousand values) or Heavy (megabytes, terabytes, etc.). In addition to raw values, data can refer to Format (the rank and dimensions of an array) or Model (how that data is to be used, e.g. XYZ coordinates vs. vector components).
Data format refers to the raw data to be manipulated. Information like number type (float, integer, etc.), precision, location, rank, and dimensions completely describes any dataset regardless of its size. The description of the data is also separate from the values themselves. We refer to the description of the data as '''Light''' data and the values themselves as '''Heavy''' data. Light data is small and can be passed between modules easily. Heavy data can be enormous, so its movement needs to be kept to a minimum. Due to the different nature of heavy and light data, they are stored using separate mechanisms. Light data is stored using XML; Heavy data is typically stored using HDF5. While we could have chosen to store the light data using HDF5 attributes, using XML does not require every tool to have access to the compiled HDF5 libraries in order to perform simple operations.
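
For illustration, the Light data describing a single Heavy array might look like the following DataItem sketch; the file name example.h5 and the dataset path /XYZ are placeholders, not names from any actual dataset:

<pre>
<!-- Illustrative sketch: example.h5 and /XYZ are hypothetical names -->
<DataItem NumberType="Float" Precision="4" Format="HDF" Dimensions="1000 3">
  example.h5:/XYZ
</DataItem>
</pre>

Everything a tool needs in order to plan for the array (number type, precision, rank, and dimensions) is in the XML; only the thousand coordinate triples themselves live in the HDF5 file.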
 
Data model refers to the intended use of the data. For example, a three-dimensional array of floating point values may be the X,Y,Z geometry for a grid or calculated vector values. Without a data model, it is impossible to tell the difference. Since the data model only describes the data, it is purely light data and thus stored using XML. It is targeted at scientific simulation data, concentrating on scalars, vectors, and tensors defined on some type of computational grid. Structured and Unstructured grids are described via their topology and geometry. Calculated, time-varying data values are described as attributes of the grid. The actual values for the grid geometry, connectivity, and attribute values are contained in the data format. This separation of data format and model allows HPC codes to efficiently produce and store values in a convenient manner without being encumbered by our data model, which may differ from their internal arrangement.
 


XDMF uses XML to store Light data and to describe the data Model. HDF5 is used to store Heavy data. The data Format is stored redundantly in both XML and HDF5. This allows tools to parse XML to determine the resources that will be required to access the Heavy data.
The XDMF data model, stored in XML, describes what the Heavy data represents. In this model, HPC data is viewed as a hierarchy of Domains. A Domain must contain at least one Grid. A Grid is the basic representation of both the geometric and computed/measured values. A Grid is considered to be a group of elements with Structured or Unstructured Topology and their associated values. In addition to the Topology of the Grid, Geometry specifying the X, Y, and Z positions of its nodes is required. Finally, a Grid may have one or more Attributes. Attributes are used to store any other value associated with the Grid and may be referenced to the Grid or to the individual cells that comprise it.
The concept of separating the light data from the heavy data is critical to the performance of this data model and format. HPC codes can read and write data in large, contiguous chunks that are natural to their internal data storage, to achieve optimal I/O performance. If codes were required to significantly rearrange data prior to I/O operations, data locality, and thus performance, could be adversely affected, particularly on codes that attempt to make maximum use of memory cache. The complexity of the dataset is described in the light data portion, which is small and transportable. For example, the light data might specify a topology of one million hexahedra while the heavy data would contain the geometric XYZ values of the mesh and pressure values at the cell centers, stored in large, contiguous arrays. This key feature allows reusable tools to be built that do not put onerous requirements on HPC codes. Despite the complexity of the organization described in the XML below, the HPC code only needs to produce the three HDF5 datasets for geometry, connectivity, and pressure values.
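
The following is a minimal sketch of such a file; the HDF5 file name example.h5, the dataset paths, and the exact node count are illustrative placeholders:

<pre>
<?xml version="1.0" ?>
<!DOCTYPE Xdmf SYSTEM "Xdmf.dtd" []>
<Xdmf Version="2.0">
  <Domain>
    <Grid Name="Mesh" GridType="Uniform">
      <!-- One million hexahedra; the connectivity is a Heavy array in HDF5 -->
      <Topology TopologyType="Hexahedron" NumberOfElements="1000000">
        <DataItem NumberType="Int" Format="HDF" Dimensions="1000000 8">
          example.h5:/Connectivity
        </DataItem>
      </Topology>
      <!-- XYZ positions of the 101 x 101 x 101 = 1030301 nodes -->
      <Geometry GeometryType="XYZ">
        <DataItem NumberType="Float" Precision="4" Format="HDF" Dimensions="1030301 3">
          example.h5:/XYZ
        </DataItem>
      </Geometry>
      <!-- Pressure, one scalar per cell center -->
      <Attribute Name="Pressure" AttributeType="Scalar" Center="Cell">
        <DataItem NumberType="Float" Precision="4" Format="HDF" Dimensions="1000000">
          example.h5:/Pressure
        </DataItem>
      </Attribute>
    </Grid>
  </Domain>
</Xdmf>
</pre>

The HPC code itself only writes the three contiguous HDF5 arrays; the XML wrapper is small enough to be emitted with ordinary text output.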


While not required, a C++ API is provided to read and write XDMF data. This API has also been wrapped so it is available from popular languages like Python, Tcl, and Java. The API is not necessary in order to produce or consume XDMF data. Currently, several HPC codes that already produce HDF5 data use native text output to generate the XML necessary for valid XDMF.


'''More Detail'''
*[[XDMF Model and Format]]
*[[XDMF API]]


'''How do I ...'''
# [[Get Xdmf]]
# [[Read Xdmf]]
# [[Write Xdmf]]
# [[Validate Xdmf's XML]]
# [[Xdmf3 Fortran API|Write from Fortran]]
# [[Read from MySQL]]
# [[Parallel IO with MPI]]


'''Data Format Examples'''
* The files for the xdmf3 regression test suite can be obtained from a VTK build tree in $BLDTREE/ExternalData/Testing/Data/XDMF
* Generate or read data in ParaView, then save it in XDMF format.
* [http://www.paraview.org/Wiki/ParaView/Data_formats#Reading_a_time_varying_Raw_file_into_Paraview time varying binary data dumps]
* [[examples/imagedata | ImageData from h5 array]]


'''History and Road Map'''
* [[Version 1]]
* [[Version 2]]
* [[Version 3]]
* [[New Features]]
* [[V3_Road_Feature_Request|Feature Requests]]
* [[V2_To_V3|V2 to V3 Transition and bugs]]




'''Mailing list'''
* Join the Xdmf mailing list [http://www.kitware.com/cgi-bin/mailman/listinfo/xdmf here]


'''Bug reports'''
* To report a bug or request a feature, go to the [https://gitlab.kitware.com/xdmf/xdmf/issues GitLab issue tracker].
* The GitLab issue tracker has been in use since September 2016. Bugs reported before then can be found in the [http://public.kitware.com/Bug/view_all_bug_page.php?project_id=4 Mantis Bugtracking system].


=== Web site administration ===


'''Wiki Account'''
* Please improve the pages! To get an account, send an email to Dave DeMarle at kitware.com with XDMF in the subject line, and include your preferred user name and email address in the message body.


* [http://www.mediawiki.org/wiki/Help:Configuration_settings Configuration settings list]
* [http://www.mediawiki.org/wiki/Help:FAQ MediaWiki FAQ]
* [http://mail.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]
