3.1.2. nidata.core.fetchers package¶
3.1.2.1. Submodules¶
3.1.2.2. nidata.core.fetchers.aws_fetcher module¶
class nidata.core.fetchers.aws_fetcher.AmazonS3Fetcher(*args, **kwargs)[source]¶
    Bases: nidata.core.fetchers.base.Fetcher
    Methods
    dependencies = ['boto']¶
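The dependencies = ['boto'] attribute advertises the third-party package this fetcher needs. A minimal sketch of checking for that dependency up front (the zero-argument construction is an assumption; only (*args, **kwargs) is documented):

    from nidata.core.fetchers.aws_fetcher import AmazonS3Fetcher

    # 'boto' is the only declared dependency; make sure it is importable.
    try:
        import boto  # noqa: F401
    except ImportError:
        raise SystemExit("Install boto before using AmazonS3Fetcher.")

    fetcher = AmazonS3Fetcher()  # assumed zero-argument construction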
3.1.2.3. nidata.core.fetchers.base module¶
Utilities to download NeuroImaging datasets
class nidata.core.fetchers.base.Fetcher(*args, **kwargs)[source]¶
    Bases: nidata.core.objdep.ClassWithDependencies
    Methods
    dependencies = []¶
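Concrete fetchers subclass Fetcher and list the third-party packages they rely on in the dependencies class attribute (the base class declares none). A minimal sketch of that pattern; the subclass name and the 'requests' dependency are hypothetical:

    from nidata.core.fetchers.base import Fetcher

    class MyCustomFetcher(Fetcher):
        # Hypothetical subclass: declare the packages this fetcher needs,
        # in the same way AmazonS3Fetcher declares ['boto'].
        dependencies = ['requests']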
nidata.core.fetchers.base.chunk_report(bytes_so_far, total_size, initial_size, t0)[source]¶
    Show downloading percentage.
    Parameters:
        bytes_so_far: int
            Number of bytes downloaded so far.
        total_size: int
            Total size of the file (may be 0 or None, depending on the download method).
        initial_size: int
            If resuming, the initial size of the file; if not resuming, set to zero.
        t0: int
            The time in seconds (as returned by time.time()) at which the download was started or resumed.
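A minimal usage sketch (the byte counts are illustrative placeholders; chunk_report only prints a progress message):

    import time
    from nidata.core.fetchers.base import chunk_report

    t0 = time.time()
    # Pretend a 10 MB download was resumed at the 2 MB mark and 4 MB have now
    # been received in total; this prints the current download percentage.
    chunk_report(bytes_so_far=4 * 1024 * 1024,
                 total_size=10 * 1024 * 1024,
                 initial_size=2 * 1024 * 1024,
                 t0=t0)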
nidata.core.fetchers.base.filter_columns(array, filters, combination='and')[source]¶
    Return indices of recarray entries that match the criteria.
    Parameters:
        array: numpy array with columns
            Array in which data will be filtered.
        filters: list of criteria
            See _filter_column.
        combination: string, optional
            String describing the combination operator. Possible values are 'and' and 'or'.
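A minimal sketch, assuming each criterion maps a column name to the value that column must equal; the exact criteria format is defined by the private _filter_column helper, so this is an assumption:

    import numpy as np
    from nidata.core.fetchers.base import filter_columns

    # A small array with named columns.
    subjects = np.array([(1, 'M'), (2, 'F'), (3, 'F')],
                        dtype=[('subject_id', int), ('sex', 'U1')])

    # Select entries whose 'sex' column equals 'F' (criteria format assumed).
    mask = filter_columns(subjects, {'sex': 'F'}, combination='and')
    print(subjects[mask])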
3.1.2.4. nidata.core.fetchers.http_fetcher module¶
class nidata.core.fetchers.http_fetcher.HttpFetcher(*args, **kwargs)[source]¶
    Bases: nidata.core.fetchers.base.Fetcher
    Methods
nidata.core.fetchers.http_fetcher.fetch_files(data_dir, files, resume=True, force=False, verbose=1, delete_archive=True)[source]¶
    Load the requested dataset, downloading it if needed or requested.
    This function retrieves files from disk or downloads them from the given URLs. Note to developers: all files are first downloaded into a sandbox and, if everything goes well, moved into the dataset folder. This prevents corrupting previously downloaded data. For a large dataset, do not hesitate to make several calls if needed.
    Parameters:
        dataset_name: string
            Unique dataset name.
        files: list of (string, string, dict)
            List of files and their corresponding URLs. The dictionary contains options for each file: 'uncompress' to indicate that the file is an archive, 'md5sum' to check the md5 sum of the file, and 'move' if the file needs to be renamed or moved to a subfolder.
        data_dir: string, optional
            Path of the data directory. Used to force data storage in a specified location. Default: None.
        resume: bool, optional
            If true, try resuming the download if possible.
        mock: boolean, optional
            If true, create empty files when a file cannot be downloaded. Test use only.
        verbose: int, optional
            Verbosity level (0 means no messages).
    Returns:
        files: list of string
            Absolute paths of the downloaded files on disk.
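A minimal sketch of the files argument, following the documented (file name, URL, options) layout; the URLs, paths, and md5 checksum below are hypothetical placeholders:

    from nidata.core.fetchers.http_fetcher import fetch_files

    # Each entry is (target file name, source URL, options dict).
    files = [
        ('subject1/anat.nii.gz',                    # where the file should end up
         'http://example.com/data/anat.nii.gz',     # hypothetical download URL
         {'md5sum': '0123456789abcdef0123456789abcdef'}),
        ('subject1/func.tar.gz',
         'http://example.com/data/func.tar.gz',
         {'uncompress': True}),                     # archive: unpack after download
    ]

    # Download (or reuse) the files under the given data directory and get back
    # the absolute paths on disk.
    paths = fetch_files('/tmp/nidata_example', files, resume=True, verbose=1)
    print(paths)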
3.1.2.5. Module contents¶
class nidata.core.fetchers.AmazonS3Fetcher(*args, **kwargs)[source]¶
    Bases: nidata.core.fetchers.base.Fetcher
    Methods
    dependencies = ['boto']¶

class nidata.core.fetchers.HttpFetcher(*args, **kwargs)[source]¶
    Bases: nidata.core.fetchers.base.Fetcher
    Methods

class nidata.core.fetchers.Fetcher(*args, **kwargs)[source]¶
    Bases: nidata.core.objdep.ClassWithDependencies
    Methods
    dependencies = []¶
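These classes are re-exported at the package level, so they can be imported directly from nidata.core.fetchers. A short sketch of inspecting the documented dependencies attribute:

    from nidata.core.fetchers import AmazonS3Fetcher, Fetcher, HttpFetcher

    # Each fetcher class advertises the third-party packages it needs.
    print(Fetcher.dependencies)          # []
    print(AmazonS3Fetcher.dependencies)  # ['boto']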