When talking about Large Scale Data Management, one of the most basic questions is how to transfer data between a local cache and a remote data facility.
Depending on the user and community, the requirements for data transfer vary widely. The most common requirement is high performance, followed by a high level of fault tolerance and security. How well these requirements can be met depends on many factors, e.g. the average file size, access frequency and patterns, and the location of access.

In recent years, the expert group "Software Methods" has gained extensive experience in the area of highly optimized data transfer technologies. We support users in selecting from a broad spectrum of well-known data transfer protocols, each of which delivers the best results when applied to the use case it suits. Additional support is provided by generic software solutions such as our Abstract Data Access Layer API (ADALAPI), which allows switching between transfer protocols fully transparently, and an ADALAPI-based transfer client that supports multi-threaded file transfers, resumption of failed transfers, and pre-/post-transfer operations such as checksumming.
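The idea behind such an abstraction layer can be sketched in a few lines of Java. The following example is purely illustrative: the names TransferProtocol, HttpTransfer and TransferFactory are hypothetical and do not correspond to the actual ADALAPI interfaces; the sketch only demonstrates the pattern that makes transparent protocol switching possible.

// Illustrative sketch only: all interface and class names are hypothetical
// and do not reflect the actual ADALAPI API.
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// A protocol-agnostic transfer interface: callers depend only on this
// abstraction, so the concrete protocol can be swapped transparently.
interface TransferProtocol {
    void download(URI source, Path target) throws IOException;
}

// One possible implementation: plain HTTP(S) via the JDK's URL handling.
class HttpTransfer implements TransferProtocol {
    @Override
    public void download(URI source, Path target) throws IOException {
        try (InputStream in = source.toURL().openStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}

// Selects an implementation from the URI scheme, hiding the choice of
// protocol from the caller.
class TransferFactory {
    static TransferProtocol forUri(URI uri) {
        switch (uri.getScheme()) {
            case "http":
            case "https":
                return new HttpTransfer();
            default:
                throw new IllegalArgumentException("Unsupported scheme: " + uri.getScheme());
        }
    }
}

public class TransferDemo {
    public static void main(String[] args) throws IOException {
        URI source = URI.create("https://example.org/data/file.bin");
        Path target = Paths.get("file.bin");
        // The same call works regardless of which protocol backs it.
        TransferFactory.forUri(source).download(source, target);
    }
}

Further protocols, e.g. SFTP or GridFTP, could then be added as additional TransferProtocol implementations without changing any calling code, which is the property referred to above as fully transparent protocol switching.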

