remove old docs #233

Merged: merged 1 commit into from Nov 9, 2018
17 changes: 0 additions & 17 deletions docs/source/intro.rst
@@ -9,23 +9,6 @@ Pandas supports all these `BigQuery data types <https://cloud.google.com/bigquer
``TIMESTAMP`` (microsecond precision). Data types ``BYTES`` and ``RECORD``
are not supported.

Integer and boolean ``NA`` handling
+++++++++++++++++++++++++++++++++++

Since all columns in BigQuery queries are nullable, and NumPy lacks ``NA``
support for integer and boolean types, this module stores ``INTEGER`` or
``BOOLEAN`` columns with at least one ``NULL`` value as ``dtype=object``.
Otherwise those columns are stored as ``dtype=int64`` or ``dtype=bool``,
respectively.

This is the opposite of pandas' default behaviour, which promotes the integer
type to float in order to store NAs.
`See here for how this works in pandas <https://pandas.pydata.org/pandas-docs/stable/gotchas.html#nan-integer-na-values-and-na-type-promotions>`__
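
As an illustration (this sketch is not part of the original docs), the default
promotion can be reproduced with plain pandas:

.. code-block:: python

   import numpy as np
   import pandas as pd

   s = pd.Series([1, 2, 3])
   s.dtype            # dtype('int64')

   s.loc[1] = np.nan  # introducing a missing value promotes the dtype
   s.dtype            # dtype('float64')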

While this trade-off works well in most cases, it breaks down when storing
values greater than 2**53. Such values in BigQuery can represent identifiers,
and unnoticed precision loss for an identifier is exactly what we want to avoid.
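
As a hedged illustration (again, not part of the original docs), the precision
loss above 2**53 is a property of float64 itself:

.. code-block:: python

   # A hypothetical identifier just above 2**53: exact as a Python int,
   # silently off by one once round-tripped through float64.
   big_id = 2**53 + 1
   big_id               # 9007199254740993
   int(float(big_id))   # 9007199254740992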

Logging
+++++++

3 changes: 2 additions & 1 deletion docs/source/reading.rst
@@ -50,6 +50,7 @@ your job. For more information about query configuration parameters see `here
.. note::

The ``dialect`` argument can be used to indicate whether to use BigQuery's ``'legacy'`` SQL
or BigQuery's ``'standard'`` SQL (beta). The default value is ``'legacy'``. For more information
or BigQuery's ``'standard'`` SQL (beta). The default value is ``'legacy'``, though this will change
in a subsequent release to ``'standard'``. For more information
on BigQuery's standard SQL, see `BigQuery SQL Reference
<https://cloud.google.com/bigquery/sql-reference/>`__
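
For example (a sketch only; the project id and table name below are
hypothetical), opting in to standard SQL looks like this:

.. code-block:: python

   import pandas as pd

   df = pd.read_gbq(
       'SELECT name FROM `my_dataset.my_table` LIMIT 10',
       project_id='my-project',
       dialect='standard')
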
6 changes: 0 additions & 6 deletions docs/source/tables.rst
@@ -14,9 +14,3 @@ Creating Tables
{'name': 'my_int64', 'type': 'INTEGER'},
{'name': 'my_string', 'type': 'STRING'}]}
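
One way to use such a schema (a sketch, not from the original docs; the project
id and table name are hypothetical) is the ``table_schema`` argument of
``to_gbq``, which accepts the same list of field dicts:

.. code-block:: python

   import pandas as pd

   df = pd.DataFrame({'my_int64': [1, 2], 'my_string': ['a', 'b']})

   df.to_gbq('my_dataset.my_table',
             project_id='my-project',
             table_schema=[{'name': 'my_int64', 'type': 'INTEGER'},
                           {'name': 'my_string', 'type': 'STRING'}])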

.. note::

If you delete and re-create a BigQuery table with the same name, but different table schema,
you must wait 2 minutes before streaming data into the table. As a workaround, consider creating
the new table with a different name. Refer to
`Google BigQuery issue 191 <https://code.google.com/p/google-bigquery/issues/detail?id=191>`__.
5 changes: 0 additions & 5 deletions docs/source/writing.rst
@@ -49,11 +49,6 @@ a ``TableCreationError`` if the destination table already exists.
If an error occurs while streaming data to BigQuery, see
`Troubleshooting BigQuery Errors <https://cloud.google.com/bigquery/troubleshooting-errors>`__.
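
As a hedged sketch (not from the original docs; the destination table and
project id are hypothetical), the ``if_exists`` argument controls what happens
when the destination table already exists:

.. code-block:: python

   import pandas as pd

   df = pd.DataFrame({'my_string': ['a', 'b', 'c']})

   # 'fail' (the default) raises if the table exists; 'replace' recreates it;
   # 'append' streams the new rows onto the existing table.
   df.to_gbq('my_dataset.my_table',
             project_id='my-project',
             if_exists='append')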

.. note::

The BigQuery SQL query language has some oddities; see the
`BigQuery Query Reference Documentation <https://cloud.google.com/bigquery/query-reference>`__.

.. note::

While BigQuery uses SQL-like syntax, it has some important differences