Working with Databases

Connecting

dql query --db my_database.db "users(*)"

The --db flag is accepted globally (before the subcommand) or within a subcommand.

Creating a new database

If the file does not exist, pass --make-new-db-if-missing:

dql query --db fresh.db --make-new-db-if-missing "select 1"

Attaching databases

--attach adds an external database under a schema name:

dql query --db main.db --attach other.db "other.some_table(*)"

Supported backends

The default build supports SQLite. DuckDB is available as an optional build feature (--features duckdb) and is auto-detected by file extension (.duckdb, .ddb) or magic number.
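
The detection step can be sketched as follows. This is an illustrative Python sketch, not dql's actual Rust code; it assumes SQLite's documented 16-byte header string and DuckDB's on-disk magic bytes:

```python
from pathlib import Path

SQLITE_MAGIC = b"SQLite format 3\x00"   # documented 16-byte SQLite header string

def detect_backend(path: str) -> str:
    """Guess the backend for a file: extension first, then a magic-number peek."""
    if Path(path).suffix.lower() in (".duckdb", ".ddb"):
        return "duckdb"
    try:
        with open(path, "rb") as f:
            header = f.read(16)
    except OSError:
        return "sqlite"                 # missing file: fall back to the default
    if header.startswith(SQLITE_MAGIC):
        return "sqlite"
    if header[8:12] == b"DUCK":         # DuckDB writes "DUCK" after an 8-byte checksum
        return "duckdb"
    return "sqlite"
```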

Pipe backends

Pipe backends connect to external database CLIs as coprocesses. dql spawns the CLI once, keeps stdin/stdout pipes open, and sends SQL through them. Session state persists across queries.

The backend CLI must be installed and on $PATH.
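
The coprocess pattern itself is simple. A minimal Python sketch, using `cat` as a stand-in for the database CLI (it echoes each stdin line back on stdout, the way a SQL CLI answers queries over pipes):

```python
import subprocess

# Spawn the coprocess once and keep both pipes open for its lifetime.
proc = subprocess.Popen(
    ["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True, bufsize=1
)

def send(line: str) -> str:
    """Write one line to the coprocess and read one line back."""
    proc.stdin.write(line + "\n")
    proc.stdin.flush()                  # force the line through the pipe
    return proc.stdout.readline().rstrip("\n")

first = send("SELECT 1;")    # the process stays alive between calls,
second = send("SELECT 2;")   # so a real CLI would retain session state
proc.stdin.close()
proc.wait()
```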

URI scheme

--db pipe://<profile> [target]

The profile determines which binary to spawn and how to parse its output. The target is an opaque string passed to the profile.

Built-in profiles

Profile      Binary      Target                 Notes
sqlite3      sqlite3     Path to database file  CSV with headers
spatialite   spatialite  Path to database file  SpatiaLite-enabled SQLite
osqueryi     osqueryi    (none)                 Queries the local OS
duckdb       duckdb      Path to database file  DuckDB CLI
psql         psql        Connection string      PostgreSQL

Examples

# osquery
dql query --db pipe://osqueryi "processes(*), pid < 100"

# SpatiaLite
dql query --db pipe://spatialite /path/to/geo.db "spatial_ref_sys(*), #<5"

# sqlite3 CLI (not the built-in driver)
dql query --db pipe://sqlite3 /tmp/data.db "users(*)"

# PostgreSQL
dql query --db pipe://psql postgres://user@localhost/mydb "orders(*)"

The lcd profile

For CLIs without a built-in profile, lcd (lowest common denominator) is a raw escape hatch. The target is the full command line to spawn:

dql query --db pipe://lcd "spatialite_tool --csv /path/to/geo.db"

lcd sends SQL on stdin and reads CSV on stdout. No setup commands are sent.
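
Decoding header-first CSV output into rows can be sketched like this (an illustrative helper, not dql's actual code; the null_value parameter mirrors the profile option described below):

```python
import csv
import io

def parse_csv_rows(text: str, null_value: str = "NULL") -> list[dict]:
    """Parse CSV output (first row = headers) into row dicts, mapping NULLs to None."""
    reader = csv.reader(io.StringIO(text))
    headers = next(reader)
    return [
        {h: (None if v == null_value else v) for h, v in zip(headers, row)}
        for row in reader
    ]

rows = parse_csv_rows("id,name\n1,alice\n2,NULL\n")
```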

Pipe communication

Each query is framed with sentinel statements:

SELECT '__DQL_FRAME_START_<uuid>';
<query>;
SELECT '__DQL_FRAME_END_<uuid>';

dql reads stdout until the end sentinel appears, then parses the rows between the markers. A fresh UUID is generated for each query so the sentinels cannot collide with query output.
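
The framing scheme can be sketched in a few lines of Python (hypothetical helpers mirroring the protocol above, not dql's actual code):

```python
import uuid

def frame_query(sql: str) -> tuple[str, str, str]:
    """Wrap a query in start/end sentinel SELECTs with a per-query UUID."""
    tag = uuid.uuid4().hex
    start, end = f"__DQL_FRAME_START_{tag}", f"__DQL_FRAME_END_{tag}"
    framed = f"SELECT '{start}';\n{sql};\nSELECT '{end}';\n"
    return framed, start, end

def extract_rows(stdout: str, start: str, end: str) -> list[str]:
    """Keep only the output lines between the two sentinel markers."""
    lines = stdout.splitlines()
    i = next(n for n, line in enumerate(lines) if start in line)
    j = next(n for n, line in enumerate(lines) if end in line)
    return lines[i + 1 : j]
```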

Profile setup commands

Built-in profiles send setup commands at process start. Example for sqlite3:

.mode csv
.headers on
.separator ,

Custom profiles

Define a profile with a TOML file:

dql query --db pipe://myprofile --pipe-profile /path/to/myprofile.toml ...

[pipe]
binary = "my-sql-cli"
target_mode = "positional"     # "positional", "flag", or "none"
target_flag = ""               # used when target_mode = "flag"

[pipe.setup]
commands = [
    "SET output_format csv;",
    "SET show_headers true;",
]

[pipe.output]
format = "csv"                 # "csv" or "tsv"
headers = true                 # first row is column names
null_value = "NULL"            # string representing SQL NULL

[pipe.env]
MY_DB_ACCOUNT = "acct-123"
MY_DB_TOKEN   = "secret"

Profile resolution

pipe://name is resolved in order:

  1. Built-in profile (hardcoded in the binary).
  2. User config file at ~/.config/delightql/pipes/<name>.toml.

If both exist, [pipe.env] from the TOML is merged onto the built-in profile. This lets credentials live in a config file while the profile definition stays built-in.

Example ~/.config/delightql/pipes/snowflake.toml:

[pipe.env]
SNOWFLAKE_ACCOUNT          = "FHYMWGY-RJ68162"
SNOWFLAKE_USER             = "DOEKLUND"
SNOWFLAKE_PRIVATE_KEY_PATH = "~/.ssh/rsa_key.p8"
SNOWFLAKE_DATABASE         = "DELIGHTQL"
SNOWFLAKE_SCHEMA           = "USERS_PRODUCTS"
SNOWFLAKE_WAREHOUSE        = "COMPUTE_WH"

Overriding setup commands

# Replace defaults
dql query --db pipe://sqlite3 /tmp/data.db \
    --pipe-setup ".mode tabs" --pipe-setup ".headers off"

# From a file
dql query --db pipe://sqlite3 /tmp/data.db \
    --pipe-setup-file my_setup.txt

--pipe-setup and --pipe-setup-file replace the profile’s built-in setup entirely.

Limitations

  • Type fidelity – All values arrive as text. Types are inferred from schema introspection when possible.
  • Binary data – BLOBs may not round-trip through CSV encoding.
  • Synchronization – CLIs that buffer output may desynchronize framing.
  • Startup cost – First query pays CLI startup time. Subsequent queries reuse the warm process.

When to use pipe backends

Use native drivers (SQLite, DuckDB) when available. Use pipe backends when:

  • The database has no Rust driver (osquery).
  • The driver is impractical to compile in (SpatiaLite, PostGIS).
  • You already have the CLI installed and want to avoid additional setup.

See Also

dql-query(1)