Database APIs

Create a Database Export Job

To export a database, create a database export job:

POST https://app.pixiebrix.com/api/databases/records/jobs/

The payload should contain an export job request:

{
  // A name/label for the export. Will be included in the export artifact name
  name: string,
  // One or more databases from which to export
  databases: UUID[],
  // Export type, defaults to "application/json"
  media_type?: "application/json" | "application/xlsx" | "text/csv",
  filters?: {
    // ISO8601 timestamp
    start_date?: Timestamp,
    end_date?: Timestamp,
  }
}
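
For example, a minimal request sketch using Python's requests library (the token-based Authorization header and the database UUID below are placeholders, not values from this documentation):

import requests

API_TOKEN = "YOUR_API_TOKEN"  # assumption: API token passed via an Authorization header
headers = {"Authorization": f"Token {API_TOKEN}"}

payload = {
    "name": "weekly-export",                                # included in the export artifact name
    "databases": ["11111111-2222-3333-4444-555555555555"],  # placeholder database UUID
    "media_type": "text/csv",                               # optional; defaults to application/json
    "filters": {
        "start_date": "2023-01-01T00:00:00Z",               # optional ISO 8601 timestamps
        "end_date": "2023-01-31T23:59:59Z",
    },
}

response = requests.post(
    "https://app.pixiebrix.com/api/databases/records/jobs/",
    json=payload,
    headers=headers,
)
response.raise_for_status()
job = response.json()
print(job["id"], job["status"])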

PixieBrix will respond with a job:

{"id":"6a3a88cc-797d-40c1-82ce-9fd15e0e3d02","status":"STARTED"}

Call the GET https://app.pixiebrix.com/api/databases/records/jobs/:id/ endpoint to fetch the job details.

Upon success, the job detail will include a data link to the exported data.
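
For instance, a polling sketch (same placeholder token as above; the terminal status names are not documented here, so the loop simply waits for the job to leave the STARTED state):

import time
import requests

API_TOKEN = "YOUR_API_TOKEN"                     # placeholder
JOB_ID = "6a3a88cc-797d-40c1-82ce-9fd15e0e3d02"  # id returned when the job was created
headers = {"Authorization": f"Token {API_TOKEN}"}
job_url = f"https://app.pixiebrix.com/api/databases/records/jobs/{JOB_ID}/"

detail = requests.get(job_url, headers=headers).json()
while detail["status"] == "STARTED":             # poll until the job reaches a terminal state
    time.sleep(5)
    detail = requests.get(job_url, headers=headers).json()

# On success the job detail includes a link to the exported data; the "data"
# field name follows the description above -- adjust if your response differs.
download_url = detail.get("data")
if download_url:
    export = requests.get(download_url)
    export.raise_for_status()
    with open("export.csv", "wb") as f:
        f.write(export.content)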

Retrieve Organization Database Records

You can retrieve up to 10,000 records per request via the records endpoint. To bulk export records across one or more databases, see Create a Database Export Job.

To retrieve organization database records (e.g., for use with a Business Intelligence tool such as Tableau or Power BI), use the databases and records endpoints (a sketch follows the list):

  • GET /api/organizations/:organizationId/databases/: lists the Organization's databases

  • GET /api/databases/:databaseId/records/: lists the contents of a database
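
As a rough sketch of that flow (assuming the same token-based header as in the export example, and that both endpoints return JSON lists of objects with id and name fields; adjust if your responses are shaped differently):

import requests

API_TOKEN = "YOUR_API_TOKEN"      # placeholder
ORG_ID = "YOUR_ORGANIZATION_ID"   # placeholder
headers = {"Authorization": f"Token {API_TOKEN}"}
base = "https://app.pixiebrix.com/api"

# List the Organization's databases
databases = requests.get(f"{base}/organizations/{ORG_ID}/databases/", headers=headers).json()

# Fetch the records of each database (up to 10,000 records per request)
for db in databases:
    records = requests.get(f"{base}/databases/{db['id']}/records/", headers=headers).json()
    print(db.get("name"), len(records))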

Filtering Records

The endpoint supports the following optional query parameters to filter by record creation date:

  • start_date: the earliest creation date (YYYY-MM-DD) to include (inclusive)

  • end_date: the latest creation date (YYYY-MM-DD) to include (inclusive)

Both _date query parameters are optional. For example, to get all records since a date, use the start_date query parameter by itself:

  • /api/databases/:databaseId/records/?start_date=2022-05-19
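
Passed from Python, that looks like the following sketch (placeholder token and database id as before):

import requests

API_TOKEN = "YOUR_API_TOKEN"        # placeholder
DATABASE_ID = "YOUR_DATABASE_UUID"  # placeholder
headers = {"Authorization": f"Token {API_TOKEN}"}

# All records created on or after 2022-05-19 (end_date omitted)
records = requests.get(
    f"https://app.pixiebrix.com/api/databases/{DATABASE_ID}/records/",
    headers=headers,
    params={"start_date": "2022-05-19"},
).json()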

The endpoint also supports filtering by a string data value by using a data__ param:

  • data__field_name=value

To filter for null vs. non-null, use the __isnull suffix:

  • data__field_name__isnull=true

For example:

  • data__first_name=david

Field names are case-sensitive and values should be URL encoded. For example, to filter by “user email”: data__user%20email=david%40pixiebrix.com
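
A sketch combining these filters (the field names first_name, user email, and middle_name are hypothetical; passing them via params lets requests handle the URL encoding):

import requests

API_TOKEN = "YOUR_API_TOKEN"        # placeholder
DATABASE_ID = "YOUR_DATABASE_UUID"  # placeholder
headers = {"Authorization": f"Token {API_TOKEN}"}

# requests URL-encodes names and values, so "user email" becomes user%20email
# and "@" becomes %40 in the query string.
params = {
    "data__first_name": "david",                # exact string match
    "data__user email": "david@pixiebrix.com",  # hypothetical field containing a space
    "data__middle_name__isnull": "true",        # null check on a hypothetical field
}
records = requests.get(
    f"https://app.pixiebrix.com/api/databases/{DATABASE_ID}/records/",
    headers=headers,
    params=params,
).json()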

Retrieve Archived Organization Database Records

The GET /api/organizations/:organizationId/databases/:databaseId/record-archives/ endpoint returns a list of objects. Each object includes a file containing all records that were updated on that date, in JSONL format:

[
  {"file": "2023-01-04.jsonl", ...},
  {"file": "2023-01-02.jsonl", ...},
  {"file": "2023-01-01.jsonl", ...},
]

The list is ordered by date in descending order by default. You can list by date in ascending order using the ordering=date query param:

# GET /api/organizations/:organizationId/databases/:databaseId/record-archives/?ordering=date
[
  {"file": "2023-01-01.jsonl", ...},
  {"file": "2023-01-02.jsonl", ...},
  {"file": "2023-01-04.jsonl", ...},
]

If a date is skipped, as “2023-01-03” is in the example above, there are no archived records for that date.

Each file is accessible for download for 1 hour. After the link expires, call the endpoint again to get a fresh S3 download link.

Note that the archive does NOT reflect every change to a Record. The archive file for a given date is generated after the date has ended. Therefore, if a Record is written to 100 times on a given date, the archive will only contain the last update to that Record.
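
A sketch of walking the archives and parsing the JSONL files. The response objects above elide the field that holds the S3 link, so download_url below is a placeholder; check the actual response for the link field:

import json
import requests

API_TOKEN = "YOUR_API_TOKEN"        # placeholder
ORG_ID = "YOUR_ORGANIZATION_ID"     # placeholder
DATABASE_ID = "YOUR_DATABASE_UUID"  # placeholder
headers = {"Authorization": f"Token {API_TOKEN}"}

archives = requests.get(
    f"https://app.pixiebrix.com/api/organizations/{ORG_ID}/databases/{DATABASE_ID}/record-archives/",
    headers=headers,
    params={"ordering": "date"},       # oldest archive first
).json()

for archive in archives:
    url = archive.get("download_url")  # placeholder field name for the pre-signed S3 link
    if not url:
        continue
    body = requests.get(url)           # links expire after 1 hour
    body.raise_for_status()
    for line in body.text.splitlines():
        if line.strip():
            record = json.loads(line)  # one JSON record per line (JSONL)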

Data Retention Policy

Every database has a limit of 500,000 records. Once that limit is exceeded, a nightly cron job will archive all records from the oldest date(s) to S3 until the database’s record count falls back under the limit. Archived records are only accessible through the Database Record Archive API; they are not accessible via the Database Record API described in Retrieve Organization Database Records.
