xorbits.datasets.from_huggingface

xorbits.datasets.from_huggingface(path: str, name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, split: Optional[Union[str, datasets.splits.Split]] = None, cache_dir: Optional[str] = None, features: Optional[datasets.features.features.Features] = None, download_config: Optional[datasets.download.download_config.DownloadConfig] = None, download_mode: Optional[Union[datasets.download.download_manager.DownloadMode, str]] = None, verification_mode: Optional[Union[datasets.utils.info_utils.VerificationMode, str]] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, revision: Optional[Union[str, datasets.utils.version.Version]] = None, token: Optional[Union[bool, str]] = None, streaming: bool = False, num_proc: Optional[int] = None, storage_options: Optional[Dict] = None, **config_kwargs) → xorbits.datasets.dataset.Dataset

Create a dataset from a Hugging Face Datasets [Dataset].

This function is parallelized and is intended to be used with Hugging Face datasets that are loaded into memory (as opposed to memory-mapped).
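A minimal usage sketch follows. The dataset name “imdb” is only an illustration; any dataset accepted by the underlying datasets library should work the same way:

    import xorbits
    from xorbits.datasets import from_huggingface

    # Initialize a local xorbits runtime (assumed here; your deployment
    # may already provide one).
    xorbits.init()

    # Load the train split of a Hub dataset as an xorbits Dataset.
    ds = from_huggingface("imdb", split="train")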

Parameters
  • path (str) –

    Path or name of the dataset. Depending on path, the dataset builder used comes either from a generic dataset script (JSON, CSV, Parquet, text, etc.) or from the dataset script (a Python file) inside the dataset directory. The common forms of path are illustrated in the sketch after this parameter list.

    For local datasets:

    • if path is a local directory containing only data files -> load a generic dataset builder (csv, json, text, etc.) based on the content of the directory, e.g. ‘./path/to/directory/with/my/csv/data’.

    • if path is a local dataset script, or a directory containing a dataset script with the same name as the directory -> load the dataset builder from the dataset script, e.g. ‘./dataset/squad’ or ‘./dataset/squad/squad.py’.

    For datasets on the Hugging Face Hub (list all available datasets with [huggingface_hub.list_datasets]):

    • if path is a dataset repository on the HF Hub containing only data files -> load a generic dataset builder (csv, text, etc.) based on the content of the repository, e.g. ‘username/dataset_name’.

    • if path is a dataset repository on the HF Hub with a dataset script of the same name as the directory -> load the dataset builder from the dataset script in the repository, e.g. glue, squad, or ‘username/dataset_name’ containing a script ‘dataset_name.py’.

  • name (str, optional) – The name of the dataset configuration.

  • data_dir (str, optional) – The data_dir of the dataset configuration. If specified for the generic builders (csv, text, etc.) or the Hub datasets while data_files is None, the behavior is equal to passing os.path.join(data_dir, **) as data_files to reference all the files in a directory.

  • data_files (str or Sequence or Mapping, optional) – Path(s) to source data file(s).

  • split (Split or str) – Which split of the data to load. If None, will return a dict with all splits (typically datasets.Split.TRAIN and datasets.Split.TEST). If given, will return a single Dataset. Splits can be combined and specified like in tensorflow-datasets.

  • cache_dir (str, optional) – Directory to read/write data. Defaults to “~/.cache/huggingface/datasets”.

  • features (Features, optional) – Set the features type to use for this dataset.

  • download_config ([DownloadConfig], optional) – Specific download configuration parameters.

  • download_mode ([DownloadMode] or str, defaults to REUSE_DATASET_IF_EXISTS) – Download/generate mode.

  • verification_mode ([VerificationMode] or str, defaults to BASIC_CHECKS) –

    Verification mode determining the checks to run on the downloaded/processed dataset information (checksums/size/splits/…).

    New in version 2.9.1 (datasets).

  • keep_in_memory (bool, defaults to None) – Whether to copy the dataset in memory. If None, the dataset is not copied in memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to a nonzero value. See more details in the [improve performance](../cache#improve-performance) section.

  • save_infos (bool, defaults to False) – Save the dataset information (checksums/size/splits/…).

  • revision ([Version] or str, optional) – Version of the dataset script to load. As datasets have their own git repository on the Datasets Hub, the default version “main” corresponds to their “main” branch. You can specify a different version than the default “main” by using a commit SHA or a git tag of the dataset repository.

  • token (str or bool, optional) – Optional string or boolean to use as the Bearer token for remote files on the Datasets Hub. If True or not specified, the token is read from “~/.huggingface”.

  • streaming (bool, defaults to False) –

    If set to True, the data files are not downloaded. Instead, the data is streamed progressively while iterating over the dataset. In that case an [IterableDataset] or [IterableDatasetDict] is returned instead (see the streaming sketch after the Returns section).

    Note that streaming works for datasets whose data format supports being iterated over, such as txt, csv, and jsonl; JSON files may be downloaded completely. Streaming from remote zip or gzip files is supported, but other compressed formats such as rar and xz are not yet supported, and the tgz format does not allow streaming.

  • num_proc (int, optional, defaults to None) –

    Number of processes when downloading and generating the dataset locally. Multiprocessing is disabled by default.

    New in version 2.7.0 (datasets).

  • storage_options (dict, optional, defaults to None) –

    Experimental. Key/value pairs to be passed on to the dataset file-system backend, if any.

    New in version 2.11.0 (datasets).

  • **config_kwargs (additional keyword arguments) – Keyword arguments to be passed to the BuilderConfig and used in the [DatasetBuilder].
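To make the path, name, data_files, and split semantics above concrete, here is a hedged sketch; the dataset names and file paths are placeholders, and each call mirrors what the underlying datasets library accepts:

    from xorbits.datasets import from_huggingface

    # Hub dataset script: "glue" with the "mrpc" configuration.
    ds = from_huggingface("glue", name="mrpc", split="train")

    # Local directory of data files -> a generic builder is inferred
    # from the directory's content (path is a placeholder).
    ds_local = from_huggingface("./path/to/directory/with/my/csv/data")

    # Generic csv builder with explicit per-split data files
    # (file names are placeholders).
    ds_csv = from_huggingface(
        "csv",
        data_files={"train": "train.csv", "test": "test.csv"},
        split="train",
    )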

Returns

Dataset
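The streaming mode described in the parameter list can be sketched as follows. This assumes the streaming flag is forwarded unchanged to the underlying datasets library, where it yields an iterable dataset rather than a fully downloaded one:

    from xorbits.datasets import from_huggingface

    # Stream the data instead of downloading it up front. Per the
    # parameter description, the underlying library returns an
    # iterable form in this mode; how xorbits wraps that result is
    # an assumption of this sketch.
    ds_stream = from_huggingface("imdb", split="train", streaming=True)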

This docstring was copied from datasets.