xorbits.pandas.DataFrame.to_gbq

DataFrame.to_gbq(destination_table: str, project_id: str | None = None, chunksize: int | None = None, reauth: bool = False, if_exists: ToGbqIfexist = 'fail', auth_local_webserver: bool = True, table_schema: list[dict[str, str]] | None = None, location: str | None = None, progress_bar: bool = True, credentials=None) → None

Write a DataFrame to a Google BigQuery table.

This function requires the pandas-gbq package.

See the How to authenticate with Google BigQuery guide for authentication instructions.

Parameters
  • destination_table (str) – Name of table to be written, in the form dataset.tablename.

  • project_id (str, optional) – Google BigQuery Account project ID. Optional when available from the environment.

  • chunksize (int, optional) – Number of rows to be inserted in each chunk from the dataframe. Set to None to load the whole dataframe at once.

  • reauth (bool, default False) – Force Google BigQuery to re-authenticate the user. This is useful if multiple accounts are used.

  • if_exists (str, default 'fail') –

    Behavior when the destination table exists (a combined usage sketch follows this parameter list). Value can be one of:

    'fail'
        If table exists, raise pandas_gbq.gbq.TableCreationError.
    'replace'
        If table exists, drop it, recreate it, and insert data.
    'append'
        If table exists, insert data. Create if does not exist.

  • auth_local_webserver (bool, default True) –

    Use the local webserver flow instead of the console flow when getting user credentials.

    New in version 0.2.0 of pandas-gbq.

    Changed in version 1.5.0 of pandas: the default value changed to True. Google has deprecated the auth_local_webserver = False “out of band” (copy-paste) flow.

  • table_schema (list of dicts, optional) –

    List of BigQuery table fields to which the DataFrame columns conform, e.g. [{'name': 'col1', 'type': 'STRING'},...]. If a schema is not provided, it will be generated according to the dtypes of the DataFrame columns. See the BigQuery API documentation for the available field names.

    New in version 0.3.1 of pandas-gbq.

  • location (str, optional) –

    Location where the load job should run. See the BigQuery locations documentation for a list of available locations. The location must match that of the target dataset.

    New in version 0.5.0 of pandas-gbq.

  • progress_bar (bool, default True) –

    Use the tqdm library to show a progress bar for the upload, chunk by chunk.

    New in version 0.5.0 of pandas-gbq.

  • credentials (google.auth.credentials.Credentials, optional) –

    Credentials for accessing Google APIs. Use this parameter to override default credentials, such as to use Compute Engine google.auth.compute_engine.Credentials or Service Account google.oauth2.service_account.Credentials directly.

    New in version 0.8.0 of pandas-gbq.
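
As a rough sketch of how these parameters combine, the doctest-style example below appends rows with an explicit schema and chunked uploads; the project, dataset, and table names are placeholders, and the call only succeeds with a reachable BigQuery project and valid credentials.

>>> import pandas as pd  # with Xorbits: import xorbits.pandas as pd
>>> df = pd.DataFrame({"col1": ["a", "b"], "col2": [1, 2]})
>>> schema = [
...     {"name": "col1", "type": "STRING"},
...     {"name": "col2", "type": "INTEGER"},
... ]
>>> df.to_gbq(
...     "my_dataset.my_table",    # placeholder dataset.tablename
...     project_id="my-project",  # placeholder project ID
...     if_exists="append",       # insert into the table, creating it if absent
...     table_schema=schema,      # explicit column types instead of inferred dtypes
...     chunksize=10_000,         # upload 10,000 rows per chunk
... )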

See also

pandas_gbq.to_gbq
    This function in the pandas-gbq library.

read_gbq
    Read a DataFrame from Google BigQuery.

Examples

Example taken from the Google BigQuery documentation:

>>> import pandas as pd  # with Xorbits: import xorbits.pandas as pd
>>> project_id = "my-project"
>>> table_id = 'my_dataset.my_table'
>>> df = pd.DataFrame(
...     {
...         "my_string": ["a", "b", "c"],
...         "my_int64": [1, 2, 3],
...         "my_float64": [4.0, 5.0, 6.0],
...         "my_bool1": [True, False, True],
...         "my_bool2": [False, True, False],
...         "my_dates": pd.date_range("now", periods=3),
...     }
... )
>>> df.to_gbq(table_id, project_id=project_id)
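
If default credentials are not available in the environment, a service-account key can be passed explicitly through the credentials argument. The sketch below uses the google-auth library; the key-file path is a placeholder.

>>> from google.oauth2 import service_account
>>> credentials = service_account.Credentials.from_service_account_file(
...     "path/to/key.json"  # placeholder path to a service-account key file
... )
>>> df.to_gbq(table_id, project_id=project_id, credentials=credentials)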

Warning

This method has not been implemented yet. Xorbits will try to execute it with pandas.

This docstring was copied from pandas.core.frame.DataFrame.