Environment List
All accessible accounts and environments
Your Accounts
API Quick Reference
Authenticate and load a store to see API endpoints and field reference here.
q=*&rows=100
No sorting applied
No dimensions selected
No metrics selected (count is always available)
Ready to Query
Build your query above and click "Run Query" to see results
Simulated Workloads
Account
Environment Detail
Use Environment List to choose an environment.
Data Management
Schema, stores, and data publishing
Schema Properties
Publishing writes documents to the data lake where they are indexed and queryable by all stores. Use this for quick inserts — paste JSON or upload a file. For bulk loading with schema detection, transforms, and error handling, use AELTL instead.
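The quick-insert path above — paste JSON or upload a file — can be sketched as a small pre-publish step that accepts a single object, an array, or JSON Lines and rejects rows that are not objects. The function name and parsing behavior here are illustrative assumptions, not the documented minusonedb API.

```python
import json

def prepare_publish(pasted_text: str) -> list[dict]:
    """Parse pasted JSON (object, array, or JSON Lines) into a list of documents."""
    text = pasted_text.strip()
    try:
        data = json.loads(text)
        docs = data if isinstance(data, list) else [data]
    except json.JSONDecodeError:
        # Fall back to JSON Lines: one document per non-empty line.
        docs = [json.loads(line) for line in text.splitlines() if line.strip()]
    bad = [i for i, d in enumerate(docs) if not isinstance(d, dict)]
    if bad:
        raise ValueError(f"rows {bad} are not JSON objects")
    return docs

docs = prepare_publish('{"id": 1}\n{"id": 2}')
```

For bulk loads with schema detection and error handling, AELTL remains the right path; this sketch covers only the quick-insert case.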
1. Enter Data
2. Preview & Publish
API Products
Create clean-room products, policies, templates, keys, and generated API contracts.
Products
Select a Product
Choose or create a product to start.
Product Details
Policy
Templates
Keys
Try Query
{ "note": "Run a query to preview runtime response." }
Generated OpenAPI
{ "note": "Click Fetch OpenAPI to preview spec." }
Audience Builder
Build, estimate, save, and export high-value audiences in seconds.
Definition
Filters
Live Estimate
Ready
Updated: -

Audience Quality
-
Age
Income
Top States
Channel
Filter Impact
Attribute Explorer
Saved Audiences
| Name | Dataset | Estimated Size | Updated | Actions |
|---|---|---|---|---|
| No audiences saved yet. | | | | |
Users
Manage users and access control
Environment Health
Live runtime health for this environment
Environment
-
Readiness
Checks
Settings
Account and application settings
Account
Ops Account
CORS
Appearance
API
https://ops.minusonedb.com
Session
AELTL
Archive, Extract, Load, Transform (& Fix), then Load your data.
Browse existing schema or add new properties.
Add a property to the schema.
| Column | Detected Type | M1DB Type | Confidence | Sample | Issues |
|---|---|---|---|---|---|
Denorm combines related datasets into query-ready documents. By denormalizing at ingest with minusonedb, you avoid painful query-time joins.
Historically, joining whole datasets has been too computationally expensive, but m1db's architecture turns join workloads into cheap, virtually instant index lookups. Denorm lets you validate join quality before publishing, so you can catch missing keys, fanout explosions, and ambiguous matches early.
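The join-quality checks described above can be sketched as a denormalize-at-ingest pass that embeds matching rows and reports missing keys and fanout before publishing. The function and report shape are illustrative, not the product's API.

```python
from collections import Counter

def denorm_join(left: list[dict], right: list[dict], key: str):
    """Embed matching right-side rows into each left row; report join quality."""
    index: dict = {}
    for row in right:
        index.setdefault(row[key], []).append(row)
    docs, missing = [], []
    fanout = Counter()
    for row in left:
        matches = index.get(row[key], [])
        if not matches:
            missing.append(row[key])   # key absent on the right side
        fanout[len(matches)] += 1      # counts of 2+ signal fanout explosions
        docs.append({**row, "joined": matches})
    return docs, {"missing_keys": missing, "fanout_histogram": dict(fanout)}

orders = [{"user_id": 1, "total": 9}, {"user_id": 2, "total": 5}]
users = [{"user_id": 1, "name": "Ada"}]
docs, report = denorm_join(orders, users, "user_id")
```

Here `report["missing_keys"]` surfaces order rows with no matching user, the kind of problem worth catching before the denormalized documents are published.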
Persistent rules that apply to all future loads automatically.
No transform rules defined yet.
Rules are created from the Work Queue when fixing errors, or you can add them manually below.
Drag files here or
CSV, JSON, JSONL, TSV, Parquet — including .gz compressed
Custom Credentials (optional)
Leave empty to use server-configured credentials
Credentials (optional)
Databricks credentials are used only for this request and are not stored. Prefer env vars: ELTL_DATABRICKS_HOST, ELTL_DATABRICKS_TOKEN, ELTL_DATABRICKS_WAREHOUSE_ID.
Prefer env vars: ELTL_BIGQUERY_PROJECT_ID, ELTL_BIGQUERY_SERVICE_ACCOUNT_JSON.
Auth: PAT or key-pair JWT. Prefer env vars: ELTL_SNOWFLAKE_HOST, ELTL_SNOWFLAKE_PAT, or ELTL_SNOWFLAKE_USER + ELTL_SNOWFLAKE_PRIVATE_KEY[_PATH] (optional ELTL_SNOWFLAKE_ACCOUNT, ELTL_SNOWFLAKE_PRIVATE_KEY_PASSPHRASE, ELTL_SNOWFLAKE_PUBLIC_KEY_FINGERPRINT).
Schedule (Extract)
Scheduling runs only when a long-running server is active (e.g., node server.js). Schedule config cannot store tokens; use env vars for connector auth (for example Snowflake PAT + Databricks token).
We call MCP tools server-side and import returned rows as an AELTL source.
Errors from past loads that need attention.
Move curated rows from this environment into downstream destinations.
Store -> Warehouse
Warehouse reload targets currently support Databricks SQL. Add multiple destinations to fan out one source query to many tables. Schedules run on a long-running local server and use server env vars for credentials.
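The fan-out described above — one source query delivered to many destination tables — reduces to a loop over destinations. This is a minimal sketch; the table names and the writer callable are placeholders, not the real Databricks SQL sink.

```python
def fan_out(rows: list[dict], destinations: list[str], write) -> dict[str, int]:
    """Send one source result set to every destination table."""
    results = {}
    for table in destinations:
        results[table] = write(table, rows)  # write() is a placeholder sink
    return results

written = {}
def fake_write(table: str, rows: list[dict]) -> int:
    written[table] = list(rows)
    return len(rows)

counts = fan_out([{"id": 1}], ["analytics.orders", "audit.orders"], fake_write)
```

Each destination receives the same rows, so adding a destination never requires re-running the source query.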
Store -> S3
Custom Credentials (optional)
Exports queried rows to one or more S3 objects using optional transform rules. Key templates support {{store}} and {{timestamp}}.